Artificial intelligence is becoming increasingly prevalent in modern society. While a majority of Americans still don’t know exactly how AI works, they use it on a daily basis. Whether you are navigating a new city or composing an email, AI serves as an important tool for executing daily tasks. AI presents itself in numerous forms (such as natural language processing, machine learning, and computer vision), which gives it massive potential to revolutionize a multitude of industries.
Corporations such as Google, Netflix, and Uber all utilize machine learning to improve their products. Just as AI presents itself in a variety of ways, there are different types of machine learning (ML), which is essentially a computer learning from experience to make predictions. There are three main subsets: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is a bit like having a teacher: the machine learns from a training dataset that pairs inputs with labeled outputs. Unsupervised learning, on the other hand, is when machines comb through unlabeled data to find patterns they can use. Reinforcement learning is when machines are rewarded or penalized depending on their answers. Uber uses these algorithms to learn from external factors like traffic and then predict how long it will take for your ride to arrive. Spotify, Apple Music, and Netflix all use ML to give you music and TV recommendations, and American Express uses these algorithms to detect fraud across the millions of credit cards it has issued.
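To make "learning from labeled examples" concrete, here is a toy sketch of supervised learning: a 1-nearest-neighbor classifier that predicts a label for a new input by finding the closest labeled training example. The features and labels below (a made-up "traffic level, distance" pair mapped to a wait-time category, loosely inspired by the Uber example) are invented for illustration; real systems train far richer models on far more data.

```python
# Toy supervised learning: 1-nearest-neighbor classification.
# "Training" is just storing labeled examples; prediction looks up
# the closest one. Data and labels here are made up for demonstration.

def predict(train, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(train, key=lambda ex: dist(ex[0], point))
    return closest[1]

# Training set: (features, label) pairs, e.g. (traffic level, distance) -> wait category
training_data = [
    ((1, 2), "short wait"),
    ((8, 9), "long wait"),
    ((2, 1), "short wait"),
    ((9, 7), "long wait"),
]

print(predict(training_data, (1, 1)))  # "short wait"
print(predict(training_data, (9, 9)))  # "long wait"
```

The key idea is the same one the paragraph describes: the machine never sees a rule written by a human; it generalizes from input/output pairs it was shown.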
Another subfield of AI that helps numerous tech giants develop their products is natural language processing (NLP). For instance, the Hello Barbie toy kids play with utilizes ML and NLP so that the toy can listen and respond to a child. Google Translate uses NLP to provide language translations. While Google Translate isn’t always accurate in terms of grammar and sentence structure, more advanced NLP algorithms are helping to increase accuracy. Search engines such as Google and Yahoo are able to predict search results because of NLP. One of the biggest applications of NLP is email filters (spam, inbox, other folders) that sort our email so that our inboxes stay organized.
In addition to these forms of artificial intelligence, computer vision is also an extremely important asset for companies. Computer vision essentially enables a computer to process images and identify information in them. Amazon used computer vision to change the retail industry when it launched Amazon Go: people can go to the store and walk out without having to check out because they are automatically charged through their Amazon accounts. In the healthcare industry, hospitals around the country have physicians spending hours going over patient data. Computer vision can help doctors by going through the data for them and flagging important details, saving countless hours. One major application of computer vision is in the automotive industry, with self-driving cars and autopilot. Unfortunately, computer vision is still not as accurate as human vision, which is why its applications and impact are limited.
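At its lowest level, "processing and identifying information" in an image means turning raw pixel values into decisions. Here is a deliberately tiny sketch: thresholding a grayscale image (a grid of brightness values from 0 to 255) to separate bright pixels from a dark background. Real systems like Amazon Go use deep-learning pipelines far beyond this; the grid and cutoff below are invented for illustration.

```python
# Toy computer vision: threshold a grayscale image into a binary mask,
# then count "interesting" (bright) pixels - the simplest possible way
# to turn pixels into a detection decision.

def threshold(image, cutoff=128):
    """Return a binary mask: 1 where the pixel is brighter than cutoff."""
    return [[1 if px > cutoff else 0 for px in row] for row in image]

def count_foreground(mask):
    """Count bright pixels, e.g. as a crude 'is something there?' signal."""
    return sum(sum(row) for row in mask)

image = [
    [10,  20, 200],
    [15, 220, 210],
    [12,  18,  25],
]
mask = threshold(image)
print(mask)                    # [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
print(count_foreground(mask))  # 3
```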
More and more tasks are becoming easier to do with the help of AI. While most of us associate Artificial Intelligence with being complicated and hard to understand, we’ve all actually been using AI for years. As technology develops and evolves, more of our daily routines and tasks will involve AI.
Image credit: Sara Kurfeß on Unsplash
Have you ever “talked” to Siri? Used a spell-checker? Taken advantage of Google Translate in your foreign language class? If so, you’ve experienced the power of natural language processing.
What is natural language processing?
Natural language processing (often shortened to “NLP”) is a subfield of artificial intelligence where software is taught to interpret or replicate human language. Some of the goals of NLP include:
- Determining the meanings of words and phrases in the context of, say, an article
- Transcribing spoken words into writing
- Extracting the themes, moods, or other attributes of a piece of writing
These goals can be approached in a number of ways—for example, by using machine learning algorithms.
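The third goal above, extracting the themes of a piece of writing, can be sketched with nothing more than word-frequency counting. Real NLP systems use far richer models (topic models, embeddings, transformers); the stopword list and sample text below are invented for illustration.

```python
# Minimal theme extraction: tokenize, drop common "stopwords",
# and report the most frequent remaining words.

from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in"}

def top_themes(text, n=2):
    """Return the n most frequent non-stopword words in `text`."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

article = ("The translation of language is hard. Machine translation "
           "maps language to language, and translation errors are common.")
print(top_themes(article))  # ['translation', 'language']
```

Even this crude counter surfaces what the passage is "about"; modern approaches refine the same intuition with context-aware representations of words.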
What are some applications of NLP?
For those with access to the technologies mentioned at the beginning of this article, NLP has been revolutionary, reshaping daily life by helping people carry out everyday errands more conveniently and efficiently. But NLP has also been the subject of promising research at the forefront of the AI field, leading to its application to a variety of tasks. Here are a few examples:
- NLP for content moderation: Many companies are already using NLP to monitor and regulate content published to their digital platforms—for example, online forums—in an effort to reduce violent or hateful speech.
- NLP for political analysis: NLP can be used to parse and evaluate large databases of political texts (including Tweets!) in order to determine trends—for example, voter behavior.
- NLP for research paper generation: A team at MIT developed a program called “SCIgen” which, using NLP, writes “random CS research papers”.
What are some limitations of NLP?
As with any other AI innovation, NLP has its limitations, along with a fair share of ethical concerns:
- NLP can perpetuate pre-existing biases. Last summer, I attended a talk by Ayanna Howard, an accomplished professor and roboticist. During the talk, Dr. Howard described the struggles her team faced in building a robot capable of travelling over snowy terrain. These struggles stemmed from a simple problem: no one on their team had much experience with snow! Dr. Howard’s anecdote illustrates an important point: when technology (including NLP software) is developed by a “biased” group of technologists, it can perpetuate those biases. NLP can also reflect the biases pervasive in the language of NLP datasets (for example, the association of certain adjectives with certain groups of people).
- NLP for content moderation can be ineffective or lead to censorship. Our current NLP cannot grasp many of the nuances of human language—for example, sarcasm. And some worry NLP may ban posts which would actually be protected under free speech, while missing posts crafted with truly malicious intent.
I want to get involved with NLP. What do I do?
There are a number of ways to get involved! Here are some suggestions:
- NLP-related projects: If you have the means, reach out to local professors to ask about conducting NLP research under their supervision! You can probably find a list of ongoing projects on a professor’s website; look into those that interest you. If you’re unable to pursue NLP research, consider looking into online projects, including open-source projects, that you may be able to contribute to.
- NLP-related courses: Head over to our ‘Resources’ page for a list of over 150 links to various digital learning websites! Many of these websites offer free and open NLP courses. Or, head to your local library to check out an NLP book!
Hope this article helped you learn something new about NLP, and maybe even gave you an idea for your next project. Happy coding!
Image Credit: Matthias Graben/Getty Images
From the face unlock feature on our smartphones to smart security cameras in homes and businesses, the applications of facial recognition software have become a part of our everyday lives in the past few years. Surely, facial recognition provides a wide range of benefits that help solve some of the most prominent issues in our society. But how much are we told about the potential harm that this technology can cause and the drawbacks that too often go unnoticed?
For instance, Amazon’s facial recognition system matched Jimmy Gomez, a Harvard graduate and one of the few Hispanic lawmakers serving in the U.S. House of Representatives, to a mugshot of a potential criminal. Google Photos once even labeled two black people as gorillas. In fact, multiple studies have shown that the technology is inaccurate at identifying people of color, especially black women. According to researchers at the MIT Media Lab, facial recognition systems made by IBM, Microsoft, and Face++ misidentified the gender of up to 35% of dark-skinned women, compared to at most 1% of light-skinned men. One reason for this may be that the public photos used by tech companies to train these systems include more white individuals than minorities, or that the engineers, who are predominantly white men, may unknowingly design the systems to recognize certain races better than others.
As these issues with racial bias have become increasingly evident to major tech companies, they have been steadily working towards improvement. Last year, Microsoft was able to decrease the error rates for identifying darker-skinned men and women by up to 20 times, while IBM released a million-face dataset called ‘Diversity in Faces’ that analyzed more than just the basic features of age, gender, and skin tone.
However, even as these tech giants continue to improve the accuracy of their facial recognition systems, concerns remain about the ways these systems can be used to discriminate against minorities. The main reason is the lack of federal regulation around the use of these technologies: many worry that if law enforcement abuses facial recognition to track the public, it could violate the general public’s basic civil rights and privacy. Citizens and politicians in many cities have already begun to voice their opinions on this matter; as a result, some U.S. states have banned the use of facial recognition in police officers’ body cameras, while cities in California and Massachusetts have outlawed specific uses of the technology by city officials.
Regardless of the privacy issues that surround the use of facial recognition, it is important that we do not forget about its tremendous benefits in society, whether it be its use in helping police identify criminal suspects and missing people, or its role in advanced features of many apps today. But most importantly, we must constantly remind ourselves of the possible limitations that come with this imperfect technology to avoid putting too much of our trust into it.
Image Credit: Luca Bravo on Unsplash
Do you have an idea for a research project? The MIT THINK Scholars Program may be the perfect opportunity for you. The program guides high school students through their research projects and ideas. THINK project proposals can be any science, technology, or engineering idea that can be completed in four months within a budget of a thousand dollars. To write a winning proposal, you must have your procedures planned out. Winners receive guidance from MIT undergraduate students and professors, and they also get access to weekly mentorship meetings for their projects.
The proposal should follow the format presented on the MIT Think Guide here: https://think.mit.edu/static_files/THINK_Program_Guidelines_2018_19.pdf . In short, it should contain six parts: the project title, abstract, idea, plan, personal, and references.
When creating a research project, the proposal guidelines in the program guide above are a good checklist to follow.
If you are interested in signing up for this opportunity make sure to submit your application to https://think.mit.edu/ by January 1, 2020. The winners for this program will be announced on January 15, 2020. Using this link, you can also check out the ideas proposed by past winners.
Image Credit: Wallperio.com
Today, students are encouraged to go to college more than ever, with parents and teachers claiming that a post-secondary education is a necessary step towards a good job and success in the future. According to the Pew Research Center, the number of college applications increased by 21.4 percent between 2002 and 2017. Thus, as the applicant pools to competitive colleges increase, colleges become increasingly selective of students. This has prompted many students to find ways to become more competitive in the college selection process, oftentimes by self-studying certain subjects and taking courses online.
When it comes to self-studying, there are a variety of available resources. One of the most common resources for studying for tests is prep books such as Barron’s or the Princeton Review. However, these books can be very expensive, ranging from $20 to $40. To avoid the cost, books can be borrowed from libraries, bought second-hand, or shared between peers taking the same tests. These prep books provide a variety of strategies for succeeding in specific topics, ranging from test-taking strategies to condensed, easy-to-follow material, often accompanied by charts and other graphic organizers. They also include practice tests, which resemble official standardized tests and are a good way to get used to the formats of different exams. Aside from tests, students also study various subjects on their own, out of interest or to get an edge in school. An effective way to study outside of school is with a tutor. Peers who have previously taken certain subjects can make good tutors; they can sometimes even have an edge on teachers, since they remember the feeling of not understanding a topic and may be able to communicate it in an easier-to-understand manner. Tutoring services have also been on the rise, ranging from classes that supplement the school curriculum to classes meant to teach the entirety of a subject’s material over the summer or on weekends.
The internet can also be a good place to find study materials. One popular online source to complement what is learned in school is Khan Academy, which offers free online courses and other tools for students. According to the New York Times, Khan Academy has over 10 million users worldwide and over 5,000 courses. The videos on the website are concise and easy to follow, making it a good resource for students who may be struggling with a subject or need to study before a test. Other online resources teach more general skills, including Udemy, a platform where online instructors can construct courses in their topics of interest, uploading resources such as videos, PowerPoints, and PDFs. Udemy offers over 130,000 courses in a variety of categories, including design, management, and digital marketing, though some of the courses are not free.
Some online resources are geared towards specific subjects. Duolingo is a free language-learning platform that offers 90 courses in 22 languages; it even offers fictional languages such as High Valyrian from Game of Thrones and Klingon from Star Trek. Duolingo provides lessons on grammar and vocabulary, then tests users on the material. It functions much like a video game, using a reward system with in-game currency that can be spent on character customization. Another subject-specific learning platform is Codecademy, for learning how to code. It offers free courses in programming and markup languages, providing a specific track for each language that starts with the ubiquitous “Hello World” lesson and moves on to more complex topics.
Students can also take full courses for credit online. Online classes provide students with opportunities to take classes that are not offered at their school, or can let students place out of classes at school. A report from the Brookings Institution explains that online classes are beneficial in that they give students easy access to education. However, the report also finds that online courses have higher drop-out rates than traditional classes. While online courses currently face drawbacks, there is potential for improvement, including the incorporation of artificial intelligence to personalize teaching to the student; with developments in artificial intelligence, online classes could match the pace of each individual’s learning and account for prior knowledge in a subject.
As colleges become more competitive, there are increasing numbers of resources that are available to students to give them an edge when it comes to education, both online and in-person. Furthermore, resources like Udemy and Duolingo are good for students who just want to further develop an understanding of different subjects and skills outside of school.
Sources: Pew Research Center, New York Times
Image Credit: Interior Design Magazines
From years of wasting away my life on YouTube, I have often heard creators complain about the “YouTube Algorithm” and how it damages their career by demonetizing them or not recommending them. But what is the YouTube Algorithm and how does it work? Is it even an algorithm? By exploring YouTube and the mechanisms by which YouTube recommends videos, these questions and more can be answered.
The YouTube Algorithm has evolved over the years. Before 2012, it focused simply on view count: videos with more views were recommended to more viewers. However, this led to the problem of clickbait, where creators added purposefully catchy titles without actual substance in their videos. So YouTube changed its algorithm to account for watch time (how long viewers watch a video) and session time (how long they spend on the platform). This caused creators to delay delivering on the promises their videos’ titles made. The changes also obliged creators to make high-quality videos while increasing the rate at which they produced them, and few people could make high-quality, lengthy videos quickly. It also explains why so many popular YouTubers at this time were gamers, as they could produce long videos in short periods of time without a lot of editing.
From 2016 onward, YouTube changed its algorithm again, releasing a lengthy paper describing how the new process works. In the new system, YouTube employs deep learning to improve its recommendation process. YouTube is a platform with 300 hours of content uploaded every minute; sorting through all of that data to find specific recommendations for each viewer is why two neural networks are needed: one for candidate generation and one for ranking.
The candidate generation network sorts through billions of videos and provides broad personalization using collaborative filtering. This network takes events from the user’s history and retrieves a small subset of a few hundred videos. Data such as the IDs of watched videos, search query tokens, and demographics are used.
The ranking network then has to filter through these hundreds of videos and rank them according to what the viewer is most likely to click on. It does this by assigning a score to each video using different features describing the video and the user. The highest scoring videos are then shown on recommended pages.
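The two-stage pipeline described above can be sketched in a few lines: a cheap candidate-generation pass narrows a huge catalog down to a small set, then a more careful ranking pass scores that set. The catalog, user profile, and scoring rule below are all invented for illustration; YouTube's real system uses deep networks trained on watch history, not hand-written rules.

```python
# Toy two-stage recommender: cheap candidate generation, then ranking.

def generate_candidates(catalog, user_topics, k=3):
    """Stage 1: cheap filter - keep videos matching the user's topics."""
    matches = [v for v in catalog if v["topic"] in user_topics]
    return matches[:k]

def rank(candidates, watch_minutes):
    """Stage 2: score each candidate more carefully and sort best-first."""
    def score(video):
        # Invented score: topic affinity weighted by a quality estimate.
        return watch_minutes.get(video["topic"], 0) * video["quality"]
    return sorted(candidates, key=score, reverse=True)

catalog = [
    {"title": "Cat compilation", "topic": "pets", "quality": 0.9},
    {"title": "Intro to GANs",   "topic": "tech", "quality": 0.8},
    {"title": "Baking bread",    "topic": "food", "quality": 0.7},
    {"title": "Robot arms 101",  "topic": "tech", "quality": 0.6},
]
user_topics = {"tech", "pets"}
watch_minutes = {"tech": 30, "pets": 5}  # minutes watched per topic

candidates = generate_candidates(catalog, user_topics)
ranked = rank(candidates, watch_minutes)
print([v["title"] for v in ranked])
# ['Intro to GANs', 'Robot arms 101', 'Cat compilation']
```

Splitting the work this way is the design point: the expensive scoring function only ever runs on the few items that survive the cheap filter.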
Even with this highly specialized system, YouTube receives criticism about it being a “misinformation engine” which radicalizes viewers by showing them conspiracy theories, fake news, and other disturbing content. YouTube keeps their algorithm close to their chest, so it is difficult to understand why this happens. However, it has become increasingly clear that disturbing videos are recommended more.
YouTube is constantly changing its model with new input from viewers and creators. In 2017, they supposedly began to improve the quality of videos by preventing inflammatory videos from popping up. In 2018, they added their controversial monetization policy, where clips can be eligible for making money depending on their content. This was meant to reduce the amount of content creators the platform had to actively monitor because YouTube has strict policies for what videos can get monetized. And yet, CNN reported that popular brands including Adidas, Cisco, and Hilton still had their ads running on extremist videos. This year, YouTube announced that it would be banning “borderline content” which could seriously harm or misinform viewers. The effects of this feature are still uncertain.
Essentially, YouTube uses an incredibly complicated “algorithm” which is made up of multiple components. Every YouTube video that you watch is delivered to you with a lot of metadata behind it. Now that’s something to think about the next time you scroll through your recommendations page.
Image Credit: Pixabay
Artificial intelligence, or AI, has marked a revolutionary turning point within our society. This dynamic and emerging field of technology has enabled us to interact with each other in unimaginable ways. If you looked around, you would come to see just how often our routines intertwine with the possibilities and opportunities of machine learning algorithms, cloud computing platforms, virtual reality, and image processing. With everything ranging from healthcare and environmental sustainability to education and transportation, AI introduces the promise for an efficient and technologically advanced lifestyle.
Take Google Translate, a multilingual machine translation service developed by Google. Whenever we need a quick translation, we pull out our phones, record a voice or use live translation on an image, and watch the magic happen. This is the work of deep neural networks and natural language processing (NLP), a method by which computers are programmed to analyze human languages. However, with the many prospective benefits of such advanced machinery comes a darker side: a frightening dilemma, commonly referred to as “deepfakes,” has been grabbing news headlines.
Deepfakes, which superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network (GAN), introduce us to the dangers and threats of media manipulation. GANs receive photos and videos of a person, typically in extremely large amounts, and are “trained” on these inputs. They can then generate new images and videos that look nearly identical to, and indistinguishable from, the original content. This new form of image alteration makes us question whether what we see is indeed real, and raises questions concerning the credibility of anything seen in the media, from news sources to online platforms to politics. Since anyone can find the resources to produce a manipulated video, deepfake technology undoubtedly opens doorways for malicious intent, public shaming, identity theft, and fraud.
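The “adversarial” training described above is usually written as a two-player minimax game between a generator G and a discriminator D. This is the objective from Goodfellow et al.’s original GAN paper (it is the general GAN formulation, not something specific to any deepfake tool), where x is real data, z is random noise fed to the generator, and D(x) is the discriminator’s estimate of the probability that x is real:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right]
             + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
```

Intuitively, D is trained to tell real images from generated ones, while G is trained to fool D; as both improve together, G’s outputs become ever harder to distinguish from real footage, which is exactly what makes deepfakes convincing.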
The rapid pace at which deepfake production is growing is concerning, considering its capability to influence politics by inauthentically framing and exploiting one’s words and actions. According to CNN, there are currently some 14,678 deepfake videos on the internet, and counting. It was also found that individuals and businesses have begun to make custom deepfakes for buyers and sell them for profit. So, what is being done to combat the rising exploitation of deepfake technology?
The Pentagon, through partnership with the Defense Advanced Research Projects Agency (DARPA), is working to hinder the spread of deepfakes with researchers and universities by finding ways to train computers to identify them. Additionally, organizations such as Deeptrace aim to re-establish trust in visual media by detecting and monitoring deepfakes using deep learning.
Sadly, deepfakes may be a problem that is getting too out of hand for companies and organizations to solve. In an article for Digital Trends, Luke Dormehl elaborates on why tech companies are ill-equipped to tackle this problem: deepfake technology is becoming increasingly better, and the inconsistencies present in earlier deepfakes have now been fixed. With the rate at which these visual reproductions are being created, it is nearly impossible for researchers to keep up. According to the Washington Post, Hany Farid, a computer science professor and digital forensics expert at the University of California at Berkeley, states that “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”
Larger companies are also making an effort to combat this; Facebook, for one, announced a $10 million “deepfake detection challenge,” according to VICE. The challenge is expected to launch in December, when Facebook will release a dataset of faces and videos for the development of methods and technologies that can detect algorithmically generated video.
As of now, there is not much we can do as individuals to know whether the next audio we hear or the next video we see is 100% unaltered. But educating ourselves and spreading awareness of the threats and dangers that accompany such a rapid and promising period of technological growth is something we can all do.
Think of the term “Artificial Intelligence”. What comes to mind? A perfect futuristic world with self-driving cars, towering skyscrapers, and robots? Giant robots planning to take over the world? Whatever it may be, many people think of a world with Artificial Intelligence as one that seems far off in the future, perhaps only achievable by the year 2500. However, there are already applications of Artificial Intelligence present in our current lives, whether we realize it or not.
Before we get into its applications, what is Artificial Intelligence? According to Techopedia, it is an area of Computer Science that emphasizes the creation of intelligent machines that work and react like humans. In modeling the human mind, Artificial Intelligence uses neural networks to train machines to think and act rationally. The initial idea of Artificial Intelligence was proposed by Alan Turing, the mathematician who helped break the Nazi encryption machine Enigma during World War II. Stemming from the question “Can machines think?”, his paper “Computing Machinery and Intelligence” (1950) laid the foundation for what Artificial Intelligence was to become.
So how is AI used in our lives? One simple example would be on Gmail, where there are tabs at the top that filter your email into Primary, Social, and Promotions. Gmail uses AI to filter out your emails and to make sure that the emails that you actually see are authentic.
A similar email feature that is used in other email services is the spam filter where algorithms are in place to detect and filter out any spam mail.
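A bare-bones spam filter can be sketched with a hand-made keyword score. Real email services use trained statistical models (naive Bayes classifiers and far beyond) plus many non-text signals; the words, weights, and threshold below are invented purely for illustration.

```python
# Toy spam filter: sum hand-picked word weights and compare to a threshold.
# Positive weights mark spammy words; negative weights mark legitimate ones.

SPAM_WEIGHTS = {"free": 2, "winner": 3, "prize": 3, "urgent": 2, "meeting": -2}

def spam_score(subject):
    """Sum the weights of known spammy (or legitimate) words."""
    words = subject.lower().split()
    return sum(SPAM_WEIGHTS.get(w, 0) for w in words)

def classify(subject, threshold=3):
    return "spam" if spam_score(subject) >= threshold else "inbox"

print(classify("free prize winner urgent"))  # spam
print(classify("team meeting notes"))        # inbox
```

A learned filter replaces the hand-picked weights with ones estimated from millions of labeled emails, but the scoring-and-threshold structure is the same.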
Whenever we go online, we are unconsciously interacting with AI by feeding it our data. When we go on YouTube and click on the videos we want to watch, that information is plugged into YouTube’s algorithm, which is used to recommend videos to us. Similarly, when we search for a product online, Google’s algorithm learns from that information and generates ads on other websites for products it thinks we are interested in. According to Forbes, artificial intelligence has the potential to contribute $15.7 trillion to the global economy by 2030, with targeted advertising among the drivers of that growth.
The “Internet of Things” (IoT), a network of interconnected devices that exchange data, is full of information, and artificial intelligence can go through all of that data and learn new information to make our lives easier. One practical application today is Google Maps, which estimates how far away a destination is and how long it will take you to get there. Using the location data from your smartphone, the app can compare the position of your device from one point in time to another, calculating how fast or slow you are traveling. Thus, it can determine the pace of traffic in real time; combined with incident reports, it can better predict how long it will take you to get from Point A to Point B.
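The speed estimate described above can be sketched directly: given two GPS readings (latitude, longitude, timestamp), compute the distance between them with the haversine formula and divide by the elapsed time. The coordinates and times below are made up for illustration; Google’s real pipeline aggregates readings from many devices at once.

```python
# Estimate travel speed from two (lat, lon, time) readings using the
# haversine great-circle distance.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def speed_kmh(p1, p2):
    """p1, p2 are (lat, lon, seconds_since_start) readings."""
    dist = haversine_km(p1[0], p1[1], p2[0], p2[1])
    hours = (p2[2] - p1[2]) / 3600
    return dist / hours

reading_a = (37.7749, -122.4194, 0)     # downtown San Francisco
reading_b = (37.8044, -122.2712, 1800)  # Oakland, 30 minutes later
print(round(speed_kmh(reading_a, reading_b), 1))  # average speed in km/h
```

Repeating this calculation for every device on a road segment is, in essence, how real-time traffic pace can be inferred from location data.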
Despite not realizing it at times, we are always interacting with Artificial Intelligence when we go online. With advanced training of neural networks through larger quantities of data, computer scientists will be able to apply AI to more aspects of our lives and make our lives easier. Maybe the vision of robots walking around the streets isn't too far off in the future after all.
Short blog post and plug this week!
We're excited that one of our co-founders will be speaking at the National CSforALL summit in Utah on October 22! She'll be part of a panel on youth-led innovation and outreach in CS, and will talk about Allgirlithm as well as her community workshops for girls. Although her panel won't be broadcast, you can tune into the livestream at live.csforall.org to hear from amazing plenary speakers like Girl Scouts CEO Sylvia Acevedo, and to learn more about making computer science education accessible to all!
For more information about the summit and a full list of speakers visit: https://summit.csforall.org
Hope you'll tune in to learn about Systems Change in computer science and technology! Happy coding.
It all started with Leonardo da Vinci in the late 1400s, when he created a crude blueprint for a self-propelled cart. Throughout history, multiple companies attempted to create the world’s first self-driving car. In the 1920s, the Houdina Radio Control Company and Chandler both tried and partially succeeded. In the 1970s, Japanese engineers built upon this knowledge and created a camera system that captured images and relayed them to a computer. Now we have safety features like assisted parking and braking systems, with some cars able to park and brake themselves. To chart the path toward a fully autonomous car, engineers have defined levels of autonomy ranging from 0 (no automation) to 5 (full automation). Many vehicles today are partially automated, falling in level 2, but recent advancements from Tesla and Audi reach level 3, conditional automation, where a driver is still required but does not need to navigate or monitor everything. Da Vinci’s blueprint has since evolved into the modern construct of what is now considered a self-driving car.
Self-driving cars could revolutionize the transportation industry by creating a variety of benefits. The primary one is road safety: more than 90% of car accidents result from driver behavior and error, so if self-driving cars can reduce that error, thousands of lives can be saved annually. Fewer car accidents could also reduce the cost of insurance and expensive medical bills, saving users about $4,100 annually. Fewer crashes would lead to less congestion, meaning cars would spend less time on the road, which would decrease travel times and reduce carbon emissions. Self-driving cars also increase accessibility and independence: many individuals with disabilities are perfectly capable of being independent, and having a self-driving car would be one step in the right direction for them. Another benefit is productivity; for example, a car that parks itself lets users step out and save valuable time.
Despite the numerous benefits of self-driving cars, there are also many drawbacks, including pricing (the sticker price is still too expensive), the vulnerability of the technology and our data, massive job losses in the transportation sector, low functionality during extreme weather, and adherence to unique local laws, among others. In addition to these harms, there are many ethical concerns. Drivers make ethical choices every day and use their best judgment; for example, if an animal is crossing the street, most drivers would stop to spare the animal’s life. With a self-driving car, this gets more complicated: if the car is programmed to prioritize the passenger above all else, it might decide not to stop, prioritizing the passenger’s time instead. The ethical choices drivers make were measured by the Moral Machine survey, which found that individuals discriminate against others when driving based on race, socioeconomic status, age, gender, and looks. The Moral Machine used variations of the famous trolley problem thought experiment to explore different moral decisions. A self-driving car is meant to be better than a human and eliminate preconceived biases, but these concerns can still play out in practice: if a car is programmed to drive closer to the bike lane than to large trucks, for instance, it shifts risk from passengers onto cyclists, prioritizing one group of individuals over another.
Self-driving cars have massive potential, but for them to truly benefit society, it is imperative that we gather as many perspectives as we can on the ethical issues they pose. You can advance that progress by contributing to the Moral Machine experiment linked below. It has been over five centuries since da Vinci’s humble cart, and while we constantly innovate and iterate, we still have a long way to go.
Learn more about the Moral Machine experiment here: http://moralmachine.mit.edu/

Sources:
- https://www.thoughtco.com/history-of-self-driving-cars-4117191
- https://www.fool.com/investing/2019/09/30/what-does-the-future-hold-for-self-driving-cars.aspx
- https://www.insidescience.org/news/moral-dilemmas-self-driving-cars
- https://www.titlemax.com/resources/history-of-the-autonomous-car/
- https://www.esurance.com/insights/self-driving-cars-save-money