Over the last few years, artificial intelligence has become a buzzword in the world of technology. While the term is fairly familiar to the general public, I have realized that very few of us understand just how pervasive this technology has become in our daily lives, let alone how profoundly it could reshape the world as we know it.

This summer, I am working at the American Association for the Advancement of Science on a project to understand ethical AI development. I am currently in the process of reading the literature surrounding ethics and AI development, which has been a fascinating experience thus far.

While computer scientists have been working to develop machine learning technology since 1955, the dissemination of AI into society was not possible until cloud-based computing was introduced in the last decade. This mass influx of AI into society has ushered in what technology companies are calling the “fourth industrial revolution.” The cloud has essentially burst the floodgates that were holding back 60 years’ worth of technology, leaving us bewildered and unsure how to regulate AI both nationally and globally. Now, more than ever, we must decide how to govern this technology so that it does not come to severely violate human rights.

In my readings, I have learned how companies and governments are working to respond to this industrial revolution; however, there is a disconnect between how different spheres of influence believe we should go about regulating this technology. Because of this disconnect, developers are currently operating under their companies’ own sets of ethical principles with very little external regulation. AI is poised to become a major driver of economic change throughout the world by altering the workforce in nearly every industry. By 2020, more than 800 million people will need to learn new skills for their jobs, and two-thirds of today’s students will work in jobs that do not yet exist.

This rapid change leaves us with many questions. What happens to the workers displaced by AI-driven automation? How do we ensure data privacy as AI reaches further into our daily lives? How do we ensure human safety when creating jobs where workers must operate alongside AI machines, or as AI grows increasingly commonplace in medicine? What steps must be taken to rectify the bias that has already been shown to be built into these systems? We are at a turning point in history: it is time to take a hard look at how we can preserve human rights in an increasingly technological and data-driven world.

I am excited to continue learning about the state of AI in society and to dive deeper into its implications for our lives in the future. While I have only just begun to understand the scope of this technology, I am already in awe of the advancements we have the potential to make with the power of machine learning.