Photo Credit: Daily Mail
Researchers at the University of California, San Diego, and Adobe have recently created a way for AI to both learn a person’s style and generate images of items that match it. The system could allow retailers to create personalized clothing, or help predict fashion trends.
The two algorithms used are a convolutional neural network (CNN) and a generative adversarial network (GAN). Together, the two networks improve the results and can create multiple item images for each user. There are still a few obstacles to these AI-generated textiles hitting the market, however. For example, researchers need to turn the two-dimensional computer images into the three-dimensional patterns used to produce an actual piece of clothing. And of course, fashion sense requires knowing which items pair well together.
Amazon has been working on using AI to spot fashion trends, and Alibaba, a Chinese retail giant, has introduced FashionAI, which recommends items based on what shoppers bring into the dressing room.
Vue.ai is a fashion AI startup that recently revealed a method for creating fake fashion models. Last fall, Burberry launched a Facebook Messenger bot during London Fashion Week that offered glimpses of the new collection, shared trivia, and included a live buying option. HighSnobiety, a website covering streetwear trends, also launched a Sneaker Bot on Facebook Messenger that quickly delivers information and news from different brands.
This is just the tip of the iceberg when it comes to AI applications in fashion. It’s an exciting field, with many high-profile clients and players.
MIT Tech Review
Several Silicon Valley companies are taking advantage of AI's ability to accurately recognize images in order to improve consumers' health decisions. For instance, Habit, founded by Neil Grimmer, uses a combination of genetics and machine learning to personalize a user's diet; the startup Passio uses AI to give nutritional advice; and the New York-based company Edamam uses its Recipe Analysis API to provide nutritional information to users.
Artificial intelligence will not only assist consumers but also bring advantages to producers. In the future, AI could help recognize agricultural diseases (researchers at Cornell have already trained an AI to identify brown leaf spot disease on cassava leaves with 98% accuracy). Other applications of AI in the food industry include reducing the use of herbicides and other harmful chemicals through precision weeding, or simply aiding in the harvest of crops.
But why is AI so good at decision making? A Stanford study reported on by Food Tank concluded that artificial neural networks (analogous to the brain's neural networks) are trained with "huge data sets and large-scale computing (deep learning), boosting data-driven solutions for improving decision making." To learn more about the difference between deep learning and machine learning, feel free to check out this article by Forbes.
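As a loose illustration of that distinction (a hypothetical sketch using synthetic data, not anything from the Stanford study): a classical machine-learning model fits a simple decision rule directly, while a neural network stacks hidden layers that learn a more flexible, nonlinear rule from the data.

```python
# Illustrative only: a classical linear model vs. a small neural network
# on synthetic two-class data. All names and numbers here are hypothetical.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Two interleaved half-moon classes: not separable by a straight line.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)

# Classical ML: a single linear decision boundary.
linear = LogisticRegression().fit(X, y)

# A small neural network: two stacked hidden layers learn a curved boundary.
net = MLPClassifier(hidden_layer_sizes=(32, 32),
                    max_iter=1000, random_state=0).fit(X, y)

print(linear.score(X, y))  # the linear model struggles with the curved classes
print(net.score(X, y))     # the network fits the nonlinear boundary better
```

Scaled up to far larger networks and data sets, that same idea — layers learning representations directly from data — is what the quote above calls deep learning.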
Photo credit: The Medical Futurist
AI excels in many areas, but one place where it currently falls short is emotion. AI cannot yet detect and replicate human emotions, a limitation that concerns many people. That may change in the future.
There are autonomous, relational, and conversational devices, but so far there has not been a device that can detect emotion. Currently, an area of AI known as emotion AI is creating algorithms that can detect basic human emotions. Challenges include training multi-modal systems and gathering data on less frequent emotions. Nonetheless, emotion AI is progressing quickly, and the MIT Technology Review predicts that technology may become emotion-aware within the next five years.
Forbes ties the benefits of emotionally aware devices into chatbots, explaining how devices would be able to interact with humans better if they were aware of emotion. Emotionally intelligent chatbots would also be much more consumer-friendly. Additionally, Microsoft states that for AI to be a positive force, it will need empathy, since empathy is what will truly allow AI to solve human problems.
For AI to truly interact at the human level, it first needs empathy coupled with compassionate intelligence: the ability to act with compassion.
MIT Technology Review
Photo credit: USACO
Are you interested in spending hours hunched over a computer, debugging until 2 AM?
It's not as bad as it sounds, we promise...
The United States of America Computing Olympiad (USACO, supposedly pronounced "you-sah-co") is a multi-round competition. During each round, competitors solve various programming problems, ranging in difficulty based on the competitor's level. There are four levels: Bronze, Silver, Gold, and Platinum.
You can practice for the USACO using its online training pages, or on other competitive programming websites like Codeforces. Top performers have the chance to be selected as part of a small group of students who attend the summer training camp. Those who perform well at camp are chosen to represent the United States at the International Olympiad in Informatics (IOI); IOI 2018 will be held in Japan.
The first round of USACO 2017-2018 will be held in mid-December.
Pictured, left to right, are: Manisha Bahl, director of the Massachusetts General Hospital Breast Imaging Fellowship Program; MIT Professor Regina Barzilay (center); and Constance Lehman, professor at Harvard Medical School and chief of the Breast Imaging Division at MGH’s Department of Radiology.
Image: Jason Dorfman/CSAIL
There are over 200,000 new cases of breast cancer every year in the United States, and about 40,000 women die from the disease annually. One of the best and most common ways to diagnose breast cancer is the mammogram. However, mammograms are still imperfect and produce many false positives, which lead to unnecessary biopsies and surgeries. One cause of these false positives is high-risk lesions that appear suspicious on mammograms and are often surgically removed. Yet 90% of these lesions are benign, meaning that thousands of women undergo painful, scarring, and unnecessary surgeries.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital, and Harvard Medical School turned to artificial intelligence for an answer. Their model is trained on more than 600 existing high-risk lesions and looks for patterns in family history, demographics, genetics, and other factors. Using a random-forest classifier, the model correctly diagnosed 97% of cancers. As the name suggests, a random forest is made up of multiple decision trees. A decision tree is a predictive model that goes from observations about an item (the branches) to conclusions about that item (the leaves). Each decision tree reaches its own conclusion and votes on how the data point should be classified. Majority rules.
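The tree-voting idea can be sketched in a few lines with scikit-learn. This is a hypothetical illustration on synthetic data, standing in for the study's roughly 600 lesion records; it is not the researchers' actual model or data.

```python
# A minimal random-forest sketch: many decision trees vote, majority rules.
# The data here is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for ~600 records with features like demographics and family history.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each of the 100 trees is trained on a random resample of the data and
# a random subset of features; the forest predicts by majority vote.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

print(forest.score(X_test, y_test))  # accuracy on held-out examples
```

Averaging many slightly different trees is what makes the forest more robust than any single decision tree, which can easily overfit its training data.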
Researchers hope that this model can be incorporated into clinical practice within the next year. The team, which includes Regina Barzilay (MIT’s Delta Electronics Professor of Electrical Engineering and Computer Science), Constance Lehman (professor at Harvard Medical School and chief of the Breast Imaging Division at MGH’s Department of Radiology), Manisha Bahl of MGH, and CSAIL graduate students Nicholas Locascio, Adam Yedidia, and Lili Yu, published an article in the medical journal Radiology.