When new technologies become widespread, they often raise ethical questions. For example:
- Weapons — who should be allowed to own them?
- Printing press — what should be allowed to be published?
- Drones — where should they be allowed to go?
The answers to these questions normally come only after a technology has become common enough for issues to actually arise. As our technologies become more powerful, the potential harms grow larger. I believe we must shift from being reactive to being proactive with respect to new technological dangers.
We need to start identifying the ethical issues and possible repercussions of our technologies before they arrive. Given that technology advances exponentially, we will have less and less time to consider its ethical implications.
We need to have public conversations about all these topics now. These are questions that cannot be answered by science — they are questions about our values. This is the realm of philosophy, not science.
Artificial intelligence in particular raises many ethical questions — here are some I think are important to consider. I include many links for those looking to dig deeper.
I provide only the questions — it’s our duty as a society to determine the best answers, and eventually, the best legislation.
1. Biases in Algorithms
Machine learning algorithms learn from the training data they are given, regardless of any incorrect assumptions in the data. In this way, these algorithms can reflect, or even magnify, the biases that are present in the data.
For example, if an algorithm is trained on data that is racist or sexist, the resulting predictions will also reflect this. Some existing algorithms have mislabeled black people as “gorillas” or charged Asian Americans higher prices for SAT tutoring. Even algorithms that exclude obviously problematic variables like “race” can pick up proxies for it, such as zip codes, which are hard to disentangle from the rest of the data. Algorithms are already being used to determine credit-worthiness and hiring, and they may not pass the disparate impact test that is traditionally used to identify discriminatory practices.
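To make the disparate impact test concrete, here is a minimal sketch of the “four-fifths rule” commonly used to operationalize it: if the selection rate for one group falls below 80% of the highest group’s rate, the outcome is typically treated as evidence of adverse impact. The function name and the data are hypothetical, for illustration only.

```python
def disparate_impact_ratio(outcomes_by_group):
    """Compute the ratio of the lowest to the highest selection rate.

    outcomes_by_group maps a group label to a list of binary
    decisions (1 = favorable outcome, e.g. hired or approved).
    """
    rates = {
        group: sum(decisions) / len(decisions)
        for group, decisions in outcomes_by_group.items()
    }
    return min(rates.values()) / max(rates.values())

# Illustrative made-up data: group B receives favorable
# decisions far less often than group A.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% favorable
}

ratio = disparate_impact_ratio(decisions)
print(f"impact ratio: {ratio:.2f}")       # 0.30 / 0.80 = 0.38
print("passes four-fifths rule:", ratio >= 0.8)
```

Note that passing this simple threshold does not make an algorithm fair: the rule only compares aggregate selection rates and says nothing about proxies, calibration, or error rates across groups.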
How can we make sure algorithms are fair, especially when they are privately owned by corporations, and not accessible to public scrutiny? How can we balance openness and intellectual property?
Article from: tem.fi