Photo Credit: smartify.org
AI is being integrated into all aspects of our lives, and one place where it has tremendous applications is in museums. Museums hold vast amounts of data, and that data holds many opportunities for AI.
One way that AI can help is by sorting collections, as “more than 90 percent of (enterprise) data is unstructured, human-generated and sourced from various disparate entities” (IDC, 2015). Using image recognition, pattern recognition, machine vision and sentiment analysis (analysing the emotions conveyed through text or recognised faces), museums can find interesting new ways to quickly sort through their collections.
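As a toy illustration of the idea: at its simplest, sentiment analysis is a lexicon lookup that maps text to an emotion score. The words, weights and catalogue descriptions below are invented for illustration; real museum systems use trained models, but the principle is the same.

```python
# Hypothetical emotion lexicon: word -> sentiment weight (invented values).
LEXICON = {"joyful": 1.0, "serene": 0.5, "bleak": -0.8, "mournful": -1.0}

def sentiment(text):
    """Average the weights of known words; 0.0 if none are recognised."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# Sort invented catalogue descriptions from most to least positive tone.
descriptions = ["A joyful serene landscape", "A bleak mournful scene"]
ranked = sorted(descriptions, key=sentiment, reverse=True)
# ranked[0] is the joyful landscape
```

A museum could use scores like these to group works by emotional tone, or to surface, say, the most sombre pieces in a collection.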
Another exciting avenue for museums to explore is the use of smartphones to easily and instantaneously recognise works and access additional information about them. The app Smartify, which launched this year in 30 museums including The Met in New York and the National Gallery in London, does exactly this. It has been described as the ‘Shazam of the art world’, as it uses image recognition to scan and identify artworks. It then provides the user with information about the work, as well as interviews with the artist and other audio-visual material pertaining to the work. Currently, the app does not recognise images that are not already stored in its database, but the company is working to change this.
Additionally, some museums, like the Anne Frank House in Amsterdam, have launched Messenger bots to interact with visitors. The Anne Frank House bot tells users about the history of the museum and Anne Frank’s life. The SFMOMA also uses a chatbot that connects with visitors through texts: users send in keywords, and the bot replies with pictures of works at the museum, including the title, artist and year. These chatbots do make mistakes occasionally, but they showcase the great ways in which museums can use AI to make their works more accessible.
These are just a few of the many ways in which museums can innovatively use AI to their benefit. AI can help keep track of and sort through museums’ vast amounts of data, make this information more accessible to the public, and allow visitors to have more insightful and enjoyable museum visits.
Museums and the web
Photo credit: How-To Geek
All we hear about AI these days is its potential to power ground-breaking technology and drastically change various aspects of the average person’s life. But AI is already making a dent in everyday life.
For instance, Google’s line of Nest products applies AI to everyday items. CNET gives the examples of the Nest Hello smart doorbell, the Nest Secure alarm system, the Nest Yale Lock and the Nest Cam IQ Outdoor security camera. The alarm system and security camera both use AI for facial recognition and motion sensing. All of these applications are helping people make their homes “smarter.”
Another Nest item is the Nest Learning Thermostat. The Nest website explains that the thermostat uses AI to “learn what temperature you like and build a schedule around yours.” It recognizes daily patterns and learns a routine, which saves energy and adds convenience.
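As a toy sketch of that kind of schedule learning (the observations are invented, and this is not Nest’s actual algorithm, just the basic idea of averaging the setpoints a user chooses at each hour):

```python
from collections import defaultdict

# Invented observations: (hour of day, temperature the user set, in °C).
observations = [(7, 21.0), (7, 21.5), (7, 20.5), (22, 18.0), (22, 18.5)]

# Group the setpoints by hour, then average them into a learned schedule.
by_hour = defaultdict(list)
for hour, temp in observations:
    by_hour[hour].append(temp)

schedule = {hour: sum(temps) / len(temps) for hour, temps in by_hour.items()}
# schedule -> {7: 21.0, 22: 18.25}: warm to 21 °C at 7am, cool to 18.25 °C at 10pm
```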
AI has the potential to change life in many little ways, as well as leading technology into great breakthroughs.
Photo Credit: Business Insider
DeepArt.io is a website that uses deep neural networks to identify and combine the stylistic elements of two separate images, a technique known as style transfer. Despite the sophisticated AI underneath, no coding experience is required to use it.
The program relies on a neural algorithm developed by Leon Gatys and colleagues at the University of Tübingen in 2015. The technique has been used in photo filters on Facebook and Prisma, as well as on moving images: Kristen Stewart used style transfer in her directorial debut, the short film Come Swim, to redraw a brief dream sequence.
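The core idea of the Gatys et al. method is to represent an image’s “style” as the correlations between a network’s feature maps, captured in a Gram matrix, and then to optimise a new image so that its style representation matches one image while its content matches another. Here is a minimal NumPy sketch of just that style representation (the random array stands in for real CNN activations, so this is illustrative, not the full algorithm):

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature maps,
    which discards the spatial layout (the 'content')."""
    c, h, w = features.shape            # (channels, height, width)
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(gram_a, gram_b):
    """Mean squared difference between two style representations."""
    return float(np.mean((gram_a - gram_b) ** 2))

# A random array stands in for one layer of CNN activations.
rng = np.random.default_rng(0)
activations = rng.standard_normal((8, 4, 4))
loss = style_loss(gram_matrix(activations), gram_matrix(activations))
# identical styles give a loss of 0.0
```

Optimising an image so its Gram matrices match a painting’s while its raw features match a photo’s is what blends the content of one with the style of the other.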
In recent years, these kinds of programs have proliferated, using different techniques to create AI-assisted works that are both sophisticated and beautiful. One study even found that viewers judged AI-generated art to be more convincingly “human” than works shown at Art Basel.
For good or ill, the consequences could transform how mainstream art is produced, consumed, and valued.
First, a little history. The earliest known generative computergraphik, created around 1960 by Georg Nees, the German “father of computer art”, consisted mainly of black-and-white drawings of shapes. The first computer-generated music piece, Lejaren Hiller and Leonard Isaacson’s Illiac Suite for String Quartet, came in 1957. Both experiments were aimed at academic audiences, and not very “artistic.”
We’ve come a long way since then. Deepjazz, created by Princeton student Ji-Sung Kim, used neural networks to detect jazz musical patterns and generate new songs.
Nvidia recently published a paper documenting how researchers generate life-like images with incredibly convincing results. The algorithm takes images of a winter street and predicts what it would look like in summer. Gene Kogan, a generative artist and author of Machine Learning for Artists, has used similar methods to generate realistic images of places.
Cornell University and Adobe researchers have also been working on a sophisticated version of style transfer for photos. The process they’re developing can even use the sunset lighting of one photo and apply it to a daytime photo of another location. Google, too, has been working on “supercharging style transfer.” Researchers developed a way to combine multiple styles, mixing them like paints.
These tools may change how we value artists, too. We’re likely to see a proliferation of algorithmic art in mainstream culture; such tools will take some of the burden off artists, but may also mean fewer job opportunities in the digital economy.
Taobao, a Chinese shopping website, created banner ads for its Singles’ Day mega-shopping holiday by training algorithms on the design patterns of successful ads. Airbnb also showed off a tool that uses algorithmic art techniques to convert sketches into fully designed and functional prototypes.
The vibrant world of artistic potential that’s opened up by algorithms will be darkened by the potential for artists to lose control.
Photo credit: Pixabay
It hasn't been two years since AlphaGo beat world champion Lee Sedol at Go, but Google's DeepMind has already launched a new AI program to take its place.
In December 2017, AlphaZero defeated Stockfish, a world-class chess engine, after training for only four hours. It had no previous experience with chess beyond the basic rules, but the results were incredible: the AI went undefeated, winning 28 games and drawing the rest of a 100-game match. It then went on to beat its predecessor AlphaGo at Go, as well as Elmo at shogi.
With this breakthrough, experts were able to learn more about the thought process of a machine. According to Demis Hassabis, the AI "doesn't play like a human, and it doesn't play like a program . . . It plays in a third, almost alien, way." Analyzing AlphaZero's games, he noticed it played some outlandish yet positionally profound moves. Hassabis offers an explanation for this strange behavior: rather than supervised learning from example games, AlphaZero was trained purely through reinforcement learning, playing games against itself without any human input. DeepMind also says it takes an "arguably more human-like approach", one that involves more evaluation and planning instead of calculating lengthy variations.
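To get a feel for what learning solely from games against itself means, here is a toy, hypothetical sketch: tabular Q-learning teaching itself a simple Nim-style game (take 1 to 3 stones from a pile; whoever takes the last stone wins). This is nothing like AlphaZero's scale, which pairs deep neural networks with tree search, but the self-play idea is the same: the only training signal is the outcome of games the agent plays against itself.

```python
import random

random.seed(0)
ACTIONS = (1, 2, 3)   # a move removes 1-3 stones
Q = {}                # Q[(pile, action)]: value from the mover's point of view

def best(pile):
    """Greedy move: highest-value action available from this pile."""
    return max((a for a in ACTIONS if a <= pile),
               key=lambda a: Q.get((pile, a), 0.0))

def train(episodes=20000, alpha=0.5, eps=0.2, max_pile=10):
    for _ in range(episodes):
        pile = random.randint(1, max_pile)
        while pile > 0:                       # both "players" share one Q-table
            moves = [a for a in ACTIONS if a <= pile]
            a = random.choice(moves) if random.random() < eps else best(pile)
            nxt = pile - a
            if nxt == 0:
                target = 1.0                  # taking the last stone wins
            else:
                # the opponent moves next, so our value is minus their best value
                target = -max(Q.get((nxt, b), 0.0) for b in ACTIONS if b <= nxt)
            Q[(pile, a)] = Q.get((pile, a), 0.0) + alpha * (target - Q.get((pile, a), 0.0))
            pile = nxt

train()
```

The losing positions in this game are the multiples of 4, and the trained policy discovers that with no human input: from a pile of 5 it takes 1 stone, leaving the opponent stuck on 4.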
Ever since Deep Blue beat world chess champion Garry Kasparov in 1997, computers have revolutionized the game of chess. Now powerful machine-learning systems like AlphaZero are making a drastic impact on the field of board games. It leaves us wondering: who will defeat AlphaZero?
Google Code-in and the Google Code-in logo are trademarks of Google Inc.
Are you interested in working on projects with open-source organizations along with peers from around the world? Getting first-hand experience in the world of project development? Interested in coding, quality control, documentation, or outreach? Just getting into the world of programming?
Google Code-in, which opened for registration on November 28th and runs until January 17th, is an annual event allowing participants to do just that. Pre-university students of all skill levels, ages 13-17, are invited to participate. According to its webpage, over 4,500 students from 99 countries have completed work in the contest since 2010. Google partners with a number of open-source organizations (this year’s bunch includes Ubuntu and JBoss), giving participants the opportunity to claim tasks and work with mentors to complete assignments.
Assignments range from developing new code for an application or webpage, to installing software and documenting the process, to designing company laptop stickers and t-shirts. Google Code-in has tasks for everyone and is a great way to be introduced to the world of programming. Additionally, participants can win prizes ranging from t-shirts to a trip to Google HQ! If you’re interested, we encourage you to sign up at https://codein.withgoogle.com/.
Photo Credit: Gigaom
AI has been making great advances in social media. On November 27th, Guy Rosen, VP of Facebook Product Management, announced in a blog post that Facebook is using AI to help identify suicidal users and connect them to help.
This tool has been in use in the US for months and will now be implemented in other countries as well. Rosen wrote that in the last month alone, Facebook ‘worked with first responders on over 100 wellness checks based on reports’ thanks to this technology.
This tech uses AI for pattern recognition to ‘help accelerate the most concerning reports’ and inform local authorities, writes Rosen. Pattern recognition helps Facebook flag posts and live streams through which users may be expressing suicidal thoughts. It also searches for comments like, ‘Are you ok?’ and ‘Can I help?’ which can be strong indicators of someone needing support. It then prioritizes the posts and sends more pressing ones to be reviewed first.
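As a toy sketch of the kind of pattern matching described (the phrases and posts are invented, and Facebook's real system uses trained classifiers rather than fixed keyword lists):

```python
# Invented phrases that often signal concern in comments on a post.
CONCERN_PHRASES = ("are you ok", "can i help")

def concern_score(comments):
    """Count the comments that contain a concern phrase."""
    return sum(any(p in c.lower() for p in CONCERN_PHRASES) for c in comments)

# Invented reports: post id -> comments left on that post.
reports = {
    "post_a": ["Nice photo!", "Love this."],
    "post_b": ["Are you OK?", "Can I help?", "Thinking of you."],
}

# Send the most concerning reports to reviewers first.
queue = sorted(reports, key=lambda post: concern_score(reports[post]), reverse=True)
# queue starts with "post_b"
```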
Snapchat, too, has recently unveiled AI image-recognition technology in its latest update. It recognises objects in pictures and then offers filters tailor-made to match them. For example, if you take a picture of food, Snapchat will offer filters with phrases like ‘get in my belly’ and ‘eatin’ good’.
This is not the first time that the company has incorporated object recognition in its app. Snapchat already allows you to search for certain objects, places and events in ‘stories’. For example, if you search for ‘beach’, you will get snaps of people at beaches, and if you search for ‘football’ you will find snaps of people at football games.
This is just the beginning of AI being incorporated into social media and eventually all aspects of our daily lives.
Photo Credit: Daily Mail
Researchers at the University of California, San Diego, and Adobe have recently created a way for AI to both learn a person’s style and create images of items that match the style. The system could potentially allow retailers to create personalized clothing, or help predict fashion trends.
The two algorithms used are a convolutional neural network (CNN) and a generative adversarial network (GAN). Together, the two networks improve the results and can create multiple item images for each user. There are still a few obstacles to these AI-generated designs hitting the market, however. For example, researchers need to turn the two-dimensional computer images into the three-dimensional designs used to produce an actual piece of clothing. And of course, fashion sense requires knowing which items pair well together.
Amazon has been working on using AI to spot fashion trends, and Alibaba, a Chinese retail giant, has introduced FashionAI, which recommends items based on what shoppers bring into the dressing room.
Vue.ai, a fashion AI startup, recently revealed a method for creating fake fashion models. Last fall, Burberry launched a Facebook Messenger bot during London Fashion Week that offered glimpses of the new collection and shared trivia, as well as a live buying option. HighSnobiety, a website covering streetwear trends, also launched a Sneaker Bot on Facebook Messenger that quickly conveys information and news from different brands.
This is just the tip of the iceberg when it comes to AI applications in fashion. It’s an exciting field, with many high-profile clients and players.
MIT Tech Review
Several Silicon Valley companies are taking advantage of AI's ability to accurately recognize images in order to improve consumers' health decisions. For instance, Habit, founded by Neil Grimmer, uses a combination of genetics and machine learning to personalize the user's diet; the startup Passio uses AI to give nutritional advice; and the New York-based company Edamam uses its Recipe Analysis API to provide nutritional information to the user.
Artificial intelligence will not only assist consumers but also bring advantages to producers. In the future, AI could help recognize agricultural diseases (researchers at Cornell have already trained an AI to identify brown leaf spot disease on cassava leaves with 98% accuracy). Other applications of AI in the food industry include reducing the use of herbicides and other harmful chemicals through precision weeding, or simply aiding in the harvest of crops.
But why is AI so good at decision making? A Stanford study reported on by Food Tank concluded that artificial neural networks (analogous to the brain's neural networks) are trained with "huge data sets and large-scale computing (deep learning), boosting data-driven solutions for improving decision making." To learn more about the difference between deep learning and machine learning, feel free to check out this article by Forbes.
Photo credit: The Medical Futurist
AI excels in many areas; however, one place where it currently falls short is emotion. AI is unable to detect and replicate human emotions, a limitation that concerns many people. This may change in the future.
There are autonomous, relational, and conversational devices, but so far no device can reliably detect emotion. A subfield of AI known as emotion AI is now creating algorithms that can detect basic human emotions. Challenges include how to train multi-modal systems and how to gather data on less frequent emotions. Nonetheless, emotion AI is progressing quickly, and the MIT Technology Review predicts that technology may become emotion-aware within the next five years.
Forbes ties the benefits of emotionally-aware devices into chatbots, explaining how devices would be able to better interact with humans if they were aware of emotion. Emotionally intelligent chatbots would also be much more consumer-friendly. Additionally, Microsoft states that in order for AI to be a positive force, it will need empathy, since empathy is what will truly allow AI to solve for people-problems.
In order for AI to truly interact at the human level, it will first need empathy and compassionate intelligence -- the ability to act with compassion.
MIT Technology Review
Photo credit: USACO
Are you interested in spending hours hunched over a computer, debugging until 2 AM?
It's not as bad as it sounds, we promise...
The United States of America Computing Olympiad (USACO, supposedly pronounced "you-sah-co") is a multi-round competition. During each round, competitors solve various programming problems, ranging in difficulty based on the competitor's level. There are 4 levels: bronze, silver, gold, and platinum.
You can practice for the USACO using its online training pages or other competitive-programming sites like Codeforces. Top scorers across the rounds have the chance to be selected as part of a small group of students who attend the summer training camp, and those who perform well at camp are chosen to represent the United States at the International Olympiad in Informatics (IOI). IOI 2018 will be held in Japan.
The first round of the 2017-2018 USACO season will be held in mid-December.