Image Credit: Pixabay
Artificial intelligence, or AI, marks a revolutionary turning point for our society. This dynamic, fast-growing field has enabled us to interact with technology, and with each other, in ways that once seemed unimaginable. Look around and you'll see how often our routines intertwine with machine learning algorithms, cloud computing platforms, virtual reality, and image processing. From healthcare and environmental sustainability to education and transportation, AI holds the promise of a more efficient, technologically advanced lifestyle.
Take Google Translate, the multilingual machine translation service developed by Google. Whenever we need a quick translation, we pull out our phones, record a voice, or run live translation on an image, and watch the magic happen. This is the work of deep neural networks and of natural language processing (NLP), the method by which computers are programmed to analyze human languages. But with the many prospective benefits of such advanced machinery comes a darker side: a frightening dilemma, commonly referred to as "deepfakes," that has been grabbing news headlines.
Deepfakes, which superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network (GAN), introduce us to the dangers and threats of media manipulation. A GAN is "trained" on photos and videos of a person, typically in extremely large quantities, by pitting two neural networks against each other: a generator that produces synthetic images and a discriminator that tries to tell them apart from the real ones. The trained generator can then produce new images and videos that look nearly identical to, and indistinguishable from, the original content. This new form of image alteration makes us question whether what we see is indeed real, and it raises concerns about the credibility of anything in the media, from news sources to online platforms to politics. Since anyone can find the tools to produce a manipulated video, deepfake technology undoubtedly opens doorways for malicious intent, public shaming, identity theft, and fraud.
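To make that adversarial training process more concrete, here is a minimal GAN training loop sketched in Python with PyTorch. Everything in it is illustrative: random vectors stand in for real images, and the network sizes are arbitrary. It demonstrates the generator-versus-discriminator setup described above, nothing close to actual deepfake software.

```python
import torch
import torch.nn as nn

# Illustrative sizes only: 16-dimensional noise in, 64-dimensional "images" out.
latent_dim, data_dim = 16, 64

# Generator: turns random noise into a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

# Stand-in for a dataset of real images (e.g., photos of one person).
real_data = torch.randn(256, data_dim) * 0.5 + 1.0

for step in range(1000):
    # Train the discriminator: real samples are labeled 1, fakes 0.
    real = real_data[torch.randint(0, 256, (64,))]
    fake = G(torch.randn(64, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator output 1 on fakes.
    g_loss = bce(D(G(torch.randn(64, latent_dim))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As training progresses, the generator's outputs become harder and harder for the discriminator to reject, which is exactly why mature deepfakes are so convincing.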
The rapid pace at which deepfake production is growing is concerning, given its capability to influence politics by framing and exploiting people's words and actions inauthentically. According to CNN, there are currently around 14,678 deepfakes on the internet, and counting. It was also found that individuals and businesses have begun to make custom deepfakes for buyers, selling them for profit. So, what is being done to combat the rising exploitation of deepfake technology?
The Pentagon, through a partnership with the Defense Advanced Research Projects Agency (DARPA), is working with researchers and universities to hinder the spread of deepfakes by finding ways to train computers to identify them. Additionally, organizations such as Deeptrace aim to re-establish trust in visual media by detecting and monitoring deepfakes using deep learning.
Sadly, deepfakes may be a problem too big for companies and organizations to handle alone. In an article for Digital Trends, Luke Dormehl elaborates on why tech companies are ill-equipped to tackle it: deepfake technology keeps improving, and the inconsistencies present in earlier deepfakes have since been fixed. At the rate these visual reproductions are being created, it is nearly impossible for researchers to keep up. According to the Washington Post, Hany Farid, a computer science professor and digital forensics expert at the University of California at Berkeley, says that "The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1."
Larger companies are making an effort to combat this as well. According to VICE, Facebook announced a $10 million "deepfakes detection challenge," expected to launch in December, in which Facebook will release a dataset of faces and videos to spur the development of methods and technologies that can detect algorithmically generated video.
As of now, there is not much we can do as individuals to know whether the next audio clip we hear or the next video we see is 100% unaltered. But educating ourselves and spreading awareness of the threats and dangers that come with such a rapid, promising period of technological growth is something we can all do.
Think of the term "Artificial Intelligence." What comes to mind? A perfect futuristic world with self-driving cars, towering skyscrapers, and robots? Giant machines plotting to take over the world? Whatever it may be, many people imagine a world with Artificial Intelligence as one far off in the future, perhaps only achievable by the year 2500. However, applications of Artificial Intelligence are already present in our lives, whether we realize it or not.
Before we get into its applications, what is Artificial Intelligence? According to Techopedia, it is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. To model the human mind, Artificial Intelligence uses neural networks to train machines to think and act rationally. The idea was first proposed by Alan Turing, the mathematician who helped break the Nazi encryption machine Enigma during World War II. Starting from the question "Can machines think?", his paper "Computing Machinery and Intelligence" (1950) laid the foundation for what Artificial Intelligence would become.
So how is AI used in our lives? One simple example is Gmail, where tabs at the top filter your email into Primary, Social, and Promotions. Gmail uses AI to sort your mail so that the emails you actually see are authentic.
A similar feature in other email services is the spam filter, where algorithms detect and filter out unwanted mail.
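One classic way such a filter can work is a naive Bayes classifier trained on word counts. Here's a tiny sketch in Python with scikit-learn; the example emails are invented, and real providers use far more sophisticated systems.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data: 1 = spam, 0 = legitimate mail.
emails = ["win a free prize now", "meeting at noon tomorrow",
          "free money click here", "project update attached"]
labels = [1, 0, 1, 0]

# Turn each email into word counts, then fit a naive Bayes model.
vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(emails), labels)

print(clf.predict(vec.transform(["claim your free prize"])))  # likely [1]
```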
Whenever we go online, we are unknowingly interacting with AI by feeding it our data. When we click on videos on YouTube, that information flows into YouTube's recommendation algorithm, which uses it to suggest new videos to us. Similarly, when we search for a product online, Google's algorithm learns from that search and generates ads on other websites for products it thinks we're interested in. According to Forbes, artificial intelligence has the potential to contribute $15.7 trillion to the global economy by 2030, in part through sales generated by targeted ads like these.
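YouTube's and Google's actual systems are proprietary and vastly more complex, but the underlying idea can be sketched simply. Here's a toy example of user-based collaborative filtering in Python: recommend the videos your most similar "neighbor" has watched that you haven't. All of the data is made up.

```python
import numpy as np

# Toy watch-history matrix: rows are users, columns are videos,
# and a 1 means that user watched that video.
watched = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 1, 0, 0],   # user 1
    [0, 0, 1, 1, 0],   # user 2
])

def recommend_for(user, watched):
    """Suggest unseen videos watched by the most similar other user."""
    sims = watched @ watched[user]   # shared videos with every user
    sims[user] = -1                  # ignore the user themselves
    neighbor = int(np.argmax(sims))  # most similar other user
    unseen = (watched[neighbor] == 1) & (watched[user] == 0)
    return np.flatnonzero(unseen)

print(recommend_for(0, watched))  # user 1 is closest; suggests video 2
```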
The "Internet of Things" (IoT), a network of interconnected devices that exchange data, is full of information, and artificial intelligence can sift through all of that data to learn new things that make our lives easier. One practical application today is Google Maps, which estimates how long it will take you to reach a destination. Using your smartphone's location data, the app compares your device's position at one point in time to its position at another, calculating how fast or slow you are traveling, and thus the pace of traffic, in real time. Combined with incident reports, this lets it better predict how long it will take you to get from Point A to Point B.
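At the heart of that estimate is simple geometry: given two location pings and the time between them, you can compute the great-circle (haversine) distance and divide by the elapsed time to get a speed. Here's a small Python sketch with arbitrary coordinates; Google's real pipeline layers far more on top of this.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth's radius is ~6371 km

# Two pings from the same phone, 60 seconds apart (made-up coordinates).
dist = haversine_km(37.7749, -122.4194, 37.7812, -122.4194)
speed_kmh = dist / (60 / 3600)
print(f"about {speed_kmh:.0f} km/h")
```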
Even if we don't always realize it, we are constantly interacting with Artificial Intelligence when we go online. As neural networks are trained on ever-larger quantities of data, computer scientists will be able to apply AI to even more aspects of our lives. Maybe the vision of robots walking the streets isn't so far off in the future after all.
Short blog post and plug this week!
We're excited that one of our co-founders will be speaking at the National CSforALL summit in Utah on October 22! She'll be part of a panel on youth-led innovation and outreach in CS, and will talk about Allgirlithm as well as her community workshops for girls. Although her panel won't be broadcast, you can tune into the livestream at live.csforall.org to hear from amazing plenary speakers like Girl Scouts CEO Sylvia Acevedo, and to learn more about making computer science education accessible to all!
For more information about the summit and a full list of speakers visit: https://summit.csforall.org
Hope you'll tune in to learn about Systems Change in computer science and technology! Happy coding.
It all started with Leonardo da Vinci, who sketched a crude blueprint for a self-propelled cart in the late 15th century. Throughout history, multiple companies have attempted to create the world's first self-driving car. In the 1920s, the Houdina Radio Control Company partially succeeded with a radio-controlled Chandler car. In the 1970s, Japanese engineers built on this knowledge, creating a camera system that captured images of the road and relayed them to a computer. Today we have safety features like assisted parking and braking, with some cars able to park and brake themselves. To chart the path toward a fully autonomous car, engineers have defined six levels of autonomy, from 0 (no automation) to 5 (full automation). Many vehicles today are partially automated, falling in level 2, but recent advancements from Tesla and Audi reach level 3, conditional automation, where a driver is still required but does not need to navigate or monitor everything. Da Vinci's blueprint has evolved, step by step, into what we now consider a self-driving car.
Self-driving cars stand to revolutionize the transportation industry, with road safety as the primary benefit. More than 90% of car accidents result from driver behavior and error; if self-driving cars can reduce that error, thousands of lives could be saved annually. Fewer accidents could also lower insurance costs and expensive medical bills, saving users about $4,100 annually. Fewer crashes would mean less congestion, so cars would spend less time on the road, decreasing travel times and reducing carbon emissions. Self-driving cars also increase accessibility and independence: many individuals with disabilities are perfectly capable of living independently, and a self-driving car would be one step in the right direction for them. Finally, they could boost productivity, for example by letting users step out while the car parks itself, saving valuable time.
Despite the numerous benefits of self-driving cars, there are also many drawbacks: sticker prices that are too expensive, the vulnerability of the technology and our data, massive job losses in the transportation sector, poor performance in extreme weather, and the difficulty of adhering to unique local laws, among others. Beyond these harms, there are serious ethical concerns. Drivers make ethical choices every day using their best judgment; for example, if an animal crosses the street, most drivers stop to spare its life. With a self-driving car this gets more complicated: if the car is programmed to prioritize the passenger above all else, it might decide not to stop, prioritizing the passenger's time instead. The Moral Machine survey measured the ethical choices drivers make, using variations of the famous trolley-problem thought experiment, and found that individuals discriminate against others based on race, socioeconomic status, age, gender, and looks. A self-driving car is meant to be better than a human and to eliminate such preconceived biases, yet design decisions can encode new ones: if a car is programmed to drive closer to the bike lane than to trucks, more cyclists will be put at risk for the sake of passengers, prioritizing one group of people over another.
Self-driving cars have massive potential, but for them to truly benefit society, it is imperative that we gather as many perspectives as we can on the ethical issues they pose. You can advance this progress by contributing to the Moral Machine experiment linked below. It has been over five centuries since da Vinci's humble cart; we constantly innovate and iterate, but we still have a long way to go.
Learn more about the Moral Machine experiment here: http://moralmachine.mit.edu/

Sources:
https://www.thoughtco.com/history-of-self-driving-cars-4117191
https://www.fool.com/investing/2019/09/30/what-does-the-future-hold-for-self-driving-cars.aspx
https://www.insidescience.org/news/moral-dilemmas-self-driving-cars
https://www.titlemax.com/resources/history-of-the-autonomous-car/
https://www.esurance.com/insights/self-driving-cars-save-money
We're entering the T-Mobile Changemaker Challenge, and we need your help! We've spent a lot of time on our submission, and your vote could help show the importance of our work. If you've benefited from or been impacted by our blog, please show your support! It only takes a few minutes to make an account, like, and comment on our submission. Plus, by making an account, you'll get notifications about other cool changemaking opportunities in the future!
Your votes and comments would help us continue to expand our resources, club curriculum, and blog, and further our mission of making tech education accessible to everyone. We really appreciate your support!
Here's the link if you're interested:
On a side note, have you heard of the Everyone a Changemaker movement? Allgirlithm is proud to promote changemaking opportunities to youth around the world in partnership with Ashoka. Read more here:
Image Credit: McKinsey
Hey Allgirlithm Readers!
How's the school year going for you so far?
We found some cool new AI for Social Good resources. Hop over to our Resources page for the full list... In the meantime, check out these highlights!

Infographics are a great way to convey information and large amounts of data, especially to audiences who don't have much background knowledge on your topic. They're also compelling for telling a story. Share an infographic about AI with us, or a story of using AI for social impact, at firstname.lastname@example.org to be featured on our blog!
“Image recognition” is the training of computers to recognize objects in images. When you think of image recognition, do you think of hard-to-use software running complicated algorithms with thousands of lines of code? I know I do. But in reality, open-source datasets and free apps have made it easier than ever to dive right into image recognition. Here are a few ideas for image recognition-related projects and activities you can try!
1. Build an image recognition iOS app.
Xcode (Apple’s free development environment for creating iOS, macOS, watchOS, and tvOS apps) introduced CreateML in 2018. The framework was designed to allow Apple developers to easily build machine learning models for use in their apps. Although Xcode and its CoreML framework are capable of integrating ML models built using more specialized software, CreateML’s greatest asset is its native, easy-to-use, drag-and-drop interface which trains, tests, and selects appropriate ML classifiers automatically.
The basic steps for building an image recognition iOS app are roughly: (1) gather and label a set of example images; (2) drag the labeled folders into CreateML and let it train and test a classifier; (3) export the trained Core ML model; and (4) add the model to your Xcode project and query it from your app's code.
2. Play an image recognition game.
There are a few fantastic image recognition-related web apps. One of them is Google’s Quick, Draw! game, which gives you something to draw, then guesses what you’re drawing. Another is Google’s Autodraw experiment, in which the computer attempts to complete your drawing.
Another of Google’s AI experiments, Handwriting with a Neural Net, generates strokes matching the style of your handwriting samples, while Cartoonify creates cartoons out of your drawings.
One thing to keep in mind when playing these games is that you're providing your data (your doodles) to Google to improve its software. This may or may not be something you want to do; in any similar situation, it's a good idea to consider both privacy issues and the research and future software you may be enabling (e.g., facial recognition technology).
3. Conduct image recognition research.
Another image recognition-related project you can try is research: identify a problem, develop a hypothesis, follow a procedure (collecting and analyzing data), and come to a conclusion about the problem you posed. Research projects typically last several months.
This may seem a little more daunting than the other two, but one thing you can try is brainstorming ideas for research you might be interested in. There are lots of fantastic datasets out there; I’ve listed a few below that you might find helpful for developing your ideas!
Skin (e.g. moles and lesions): https://isic-archive.com
Handwritten digits: http://yann.lecun.com/exdb/mnist/
Remember: these are just a few examples of well-constructed datasets you can use for your image recognition research. If you're interested in topics that aren't listed above, check out the many sites dedicated to machine learning research and datasets; Kaggle, for example, has a huge inventory of information, datasets, and even competitions dedicated to machine learning.
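To see how small such a research project can start, here's a minimal baseline experiment in Python using scikit-learn's built-in digits dataset (a pocket-sized cousin of the MNIST set linked above). Train a simple classifier and measure its accuracy, and you have a starting number to improve on or a hypothesis to test.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small 8x8 handwritten-digit images bundled with scikit-learn.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A simple baseline classifier; a research project might compare
# this against other models or preprocessing choices.
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```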
Which of these image recognition activities do you plan to try? Let us know in the comments! And in the meantime, happy coding!
Photo credit: Artem Kniaz
When you think of technology as an abstract concept, it's common to associate the term with extraterrestrial spaceships or sentient machines. Technology, however, is really the progression of humanity through the development of new skills, products, et cetera. We tend to restrict our perception of technology to utopian societies or Orwellian dystopias because that is the path we assume human development will take. Sometimes, however, technology can be used to look into the past, revealing much more about the human race than we previously predicted.
Archaeological sites, buried under sediment and time, are one piece of history that researchers are looking into. Through her platform GlobalXplorer, Sarah Parcak has revolutionized the way we study the past. GlobalXplorer accumulates millions of satellite images online so that viewers can peruse them and identify archaeological sites that may have previously gone unnoticed. According to National Geographic, the project has contributed to the identification of key archaeological structures in Peru and will soon move to India, where its past successes are sure to provide key insights. A core concept of Parcak's model is its extensive involvement of volunteers: rather than limiting contributions to researchers, GlobalXplorer provides a platform for people across the globe to piece together the scattered remains of human history.
In late June, headlines in the technology industry reported the conclusion of a recently published study: even under the most optimistic extrapolation of past trends, women will not publish as much computer science research as men within this century. Given the apparent increase in the representation of female scientists and in coding organizations that support underrepresented groups, it was disheartening to hear that parity is still projected to be decades away. The trend has persisted for decades: women continue to face barriers to entry in technology fields, even with the advent of new products and job opportunities, and any hope for a clear solution is shrouded in years of stereotypes and deeply rooted obstacles in industry and academia alike. With both a gender gap and a wage gap in STEM fields, how can incoming computer scientists counter a system that seems built against them?
On a social scale, it appears that young women exposed to computer science may shy away from the field because of the stereotypes around it. A study conducted by Microsoft found that 91% of girls and 80% of young women describe themselves as creative, which conflicts with the traditional portrayal of programming as a purely logical occupation. Lumping all of STEM under one label keeps people from connecting with the parts of the field that don't fit that label. Many students never explore STEM fields because of these traditional portrayals of tech, and without introductory courses or direct experience, they may never have the chance to ignite an interest in STEM. According to the BBC, a report from the Council for the Curriculum, Examinations and Assessment found that girls were often uncomfortable studying computer-related disciplines and felt pressure to perform better if they did pursue computing.
By Anne Li
Last month, I had the opportunity to attend Apple's Worldwide Developers Conference as one of 350 scholarship winners from around the world. In this post, I'll go over my entire experience – from applying back in March to attending in June.
I’ve known about WWDC for several years now, and this year was my second time applying for the scholarship – so I have a general idea of what both a winning submission and a not-winning submission look like. Before I get into that, though, let’s cover the basics:
- I’m honestly not sure about this, but if I remember correctly, the application usually opens the second week of March
- The submission window is small – I think around 10 days from when the application portal opens to when it closes
- The application requires applicants to upload a Swift or Xcode Playground along with responses to several essay questions
I was rejected the first time I applied, which was last year. I submitted an Xcode Playground that displayed a graph of a Taylor polynomial for the sine function (link to Github repo). I thought it was really cool at the time, but in retrospect I think it was pretty lame (probably because I don’t remember how to find Taylor polynomials anymore). The playground also gave users the option of changing the center and degree of the polynomial in order to see how those factor into its overall shape.
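(For anyone who, like present-day me, has forgotten the math: the degree-n Taylor polynomial of sin centered at c is the sum over k of the k-th derivative of sin at c, times (x - c)^k / k!. Here's a quick Python sketch of the same idea my Swift playground explored, with the center and degree as the two adjustable knobs.)

```python
import math

def sin_taylor(x, center=0.0, degree=7):
    """Degree-`degree` Taylor polynomial of sin about `center`.
    The derivatives of sin cycle: sin, cos, -sin, -cos, sin, ..."""
    derivs = [math.sin(center), math.cos(center),
              -math.sin(center), -math.cos(center)]
    return sum(derivs[k % 4] * (x - center) ** k / math.factorial(k)
               for k in range(degree + 1))

print(sin_taylor(1.0), math.sin(1.0))  # very close for x near the center
```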
This year, I wanted to do something involving the algorithms I’d encountered in competitive programming, so I submitted an Xcode Playground that introduced users to breadth-first search and depth-first search. I tried to make it a lot more creative this year – incorporating mazes as a way of teaching the graph-traversal algorithms, and making it more of a game. I also drew some cute illustrations (in MS Paint lol). You can check out my playground here.
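(In case you're wondering what the algorithm itself looks like, here's a rough Python sketch of breadth-first search on a grid maze. My actual playground was written in Swift with a game built around it, but the core idea is the same: explore the maze level by level using a queue, which guarantees that the first time you reach the goal is via a shortest path.)

```python
from collections import deque

def bfs_shortest_path(maze, start, goal):
    """Return the length of the shortest path from start to goal,
    treating '#' cells as walls, or -1 if the goal is unreachable."""
    rows, cols = len(maze), len(maze[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

maze = ["..#.",
        ".#..",
        "....",
        "#.#."]
print(bfs_shortest_path(maze, (0, 0), (2, 3)))  # 5
```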
Apple paid for my ticket to the conference, as well as a week of lodging, but I was responsible for transportation to and from the conference. Unrelated, but my dad decided to come along as well, though I don’t remember why. Anyways, he mostly hung out in Cupertino and visited random places. I think he also ate at every Chinese restaurant in the area he was staying.
Here’s a day-by-day summary of the conference:
- Orientations, check-in, scholarship winners’ kickoff, etc.
- I got to meet my fantastic counselor, roommate, and some other scholarship winners!
- We also got our badges and a bunch of random stuff from Apple (pins, jacket, etc.)
- Keynote, Platform State of the Union, Apple Design Awards
- One of the most important events of WWDC; Tim Cook (Apple CEO) and other Apple engineers and developers spoke during the Keynote
- A lot of announcements, including Dark Mode for iOS, SwiftUI, Mac Pro, etc.
- I did notice that women were fairly well-represented in the Keynote – several of the speakers and presenters were women. I’m not sure how the numbers compare to previous years, but just a casual observation
- I really enjoyed the Apple Design Awards! One of the winners was an ultrasound app (ButterflyIQ), which I found really cool
- Women@WWDC Breakfast, NCWIT Roundtable Discussion
- The breakfast included a panel of women who’d won scholarships this year, as well as alumni of Apple’s Entrepreneurship Camp. One of our friends was selected for the panel, so we went to watch and support her and the others
- The roundtable discussion was later in the day – we got to talk to Apple Senior Director of WorldWide Developer Marketing Esther Hare and four Entrepreneurship Camp alumni (link to post on NCWIT blog about the experience)
- Mostly just technology labs
- I think almost any WWDC-related blog you’ll find on the web will urge you to attend labs rather than sessions, since sessions are available online after the conference – and I have to agree. Getting one-on-one advice from Apple engineers on any projects you might be working on is infinitely more helpful
- More labs
- One of the best ones I attended was the UI Design Lab; you’re paired with an Apple designer who looks at your stuff and provides feedback on the design
- I don’t have much to say on this, because I had to go to my dad’s Airbnb to get a phone charger, and then he offered to take me to a random Chinese restaurant™ with really good noodles he’d tried earlier in the week and I will exchange half of my soul for good noodles. But I heard Craig Federighi was in the crowd, so I’m still slightly jealous. Also I didn’t get to say goodbye to some friends who were leaving early :’(
- I didn’t go to any sessions or labs on Friday, hehe
- The scholarship lounge is really nice
- The food is okay
- San Jose has a lot of boba shops, including two within a couple minutes’ walking distance from the convention center (I think Gongcha and Breaktime)
- A lot of walking, especially if you decide to explore the city and/or get food outside of the conference and/or get boba
- If you end up going, be sure to check out some of the events taking place at AltConf! AltConf is free and takes place in the Marriott directly adjacent to the convention center. I went with a couple friends to a really great talk by Mayuko Inoue
- I tried to ride the VTA once and got on the wrong one. Apparently I still haven’t learned how to read numbers. I also didn’t pay attention, so didn’t realize I was on the wrong one until ~20 minutes into the trip. In conclusion, don’t ride the VTA unless you’re capable of reading numbers.
If I could go again, I would:
- Have more questions prepared for the labs
- For a couple of the labs, I kind of just wandered by and thought, “Oh, this might be helpful!” And then I couldn’t think of anything to ask.
- Attend more of the breakfasts
- In addition to Women@WWDC, there were two other breakfasts: Black@WWDC and Latinx@WWDC. The breakfast events are chances to hear from lots of different perspectives – everyone’s journey in tech is different, and recognizing that is crucial for anyone trying to promote inclusivity in tech
- Talk to more people
- I got to meet a lot of brilliant people doing brilliant things, but I do wish I’d been a bit more outgoing – though this is something I’m still working on. Sometimes I worry about being the least competent person at tech events, but the people I met at WWDC were extremely friendly and supportive regardless of accomplishment, experience, etc.
How to win a WWDC scholarship
I don’t think there’s a formulaic or clear-cut method to win a scholarship. Sorry if you just read the last ~1100 words just to read this :( . But I do have one piece of advice – start early! Like I mentioned earlier, the submission period is very short, so it helps to have an idea of what you’re going to do before the submission portal opens.
This post started off formally enough and slowly descended into anarchy. I am so sorry. But thanks for reading, and best of luck if you’re applying for a WWDC scholarship in the future!