Image Credit: Interesting Engineering
By Angela Choi
What first comes to mind when you consider how we use AI in our daily lives? Most of us think of Alexa, self-driving cars, or even our own Netflix recommendations. But AI has far more to offer across countless industries, and it has the potential to revolutionize the healthcare sector forever.
AI-powered personal healthcare apps enable smart, efficient workflows that can improve the patient experience and provide better services. These apps continuously collect data and monitor the user's vitals, serving a purpose similar to that of other health devices like wearables and discrete monitors. The collected data, stored locally or online, can then be retrieved by medical professionals as a medical report.
For instance, WebMD, one of the most well-known symptom-checkers that millions of people use every day, built an app that uses machine learning to provide trusted information that has been reviewed by qualified physicians. Additional features of the app include medication reminders, fitness tracking, updates on the latest news in healthcare, and a directory of local physicians to help users arrange appointments.
Ada, which was developed in 2016 to relieve pressure on healthcare professionals, is another medical app that is now used in 140 countries to provide care to patients at home. With the app’s instant messaging design, Ada asks simple, relevant questions about the user’s symptoms to gain a better understanding of their health. Then, Ada determines the potential medical issue by pulling data from its virtual medical library, which stores data from thousands of similar cases. Through classification, clustering, and information extraction, this AI-powered doctor can offer advice to the user on what to do next, whether it be self-medication or seeking assistance from a nearby health professional.
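To make the classification idea concrete, here is a minimal Python sketch - not Ada's actual system, and with entirely made-up symptoms and conditions - of how a model trained on past cases can suggest a likely condition from reported symptoms:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy data: each row marks which of three symptoms a past case reported
# [fever, cough, rash]; labels are the (hypothetical) diagnosed conditions.
cases = [[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
conditions = ["flu", "flu", "dermatitis", "cold", "flu", "dermatitis"]

model = DecisionTreeClassifier().fit(cases, conditions)

# A new user reports fever and cough; the model suggests the closest match.
print(model.predict([[1, 1, 0]]))  # -> ['flu']
```

A real symptom checker draws on thousands of cases and far richer features, but the principle - matching new symptoms against patterns in past cases - is the same.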
Not only do these AI-based apps make healthcare more accessible to all, but they can also help address the shortage of expertise in certain areas of medicine. SkinVision, for example, is an app that can instantly assess skin issues without the patient having to see a dermatologist in person. Users simply upload an image of a potential skin problem, and the app uses AI to scan it for signs of cancer. The assessment generates a report of low, medium, or high risk, allowing users to notify a doctor immediately when a risk is detected. As more pictures are added to the app's online database, it will be able to assess a wider variety of skin conditions with higher accuracy. Additionally, SkinVision encourages users to stay on top of their skin health by setting reminders to periodically retake the assessment.
Beyond the personal healthcare apps that we have now, the applications of artificial intelligence in the medical field will only continue to expand and help medical professionals treat patients more effectively. In fact, the total public and private sector investment in AI in the healthcare industry is predicted to reach $6.6 billion by 2021.
Although the future of AI in healthcare is uncertain, one thing is clear: there are many new, exciting breakthroughs that lie ahead.
Photo credit: Medium - Albert Lai
By Vaughn Luthringer
Computer vision and image recognition are pretty common terms nowadays. But their uses go far beyond Snapchat filters. Computer vision is, by definition, “how computers see and understand digital images and videos.” Yes, that can refer to how that dog filter gets put on your face. But, it can also refer to things much bigger, like, say, self-driving cars!
We’ve all heard of self-driving cars, autonomous cars, whatever you want to call them. We’ve heard a lot about the dilemmas that come with them, and the controversy surrounding the “futuristic” devices. What we’ve gotten less insight into is exactly how they work. So, let’s dive in!
Object detection is at the center of the function of self-driving cars. It’s broken up into two parts: object classification and object localization. In simple terms, what is the object, and exactly where is it?
Object classification is done by what is called a “convolutional neural network.” CNNs assign varying levels of “importance” to different aspects of an input image, which lets them differentiate objects from one another. The use of “sliding windows” allows the CNN to detect more than just a single object that takes up most of the input image. Sliding windows are “boxes” that move across an image, essentially creating smaller images for the CNN to analyze. Check out the header image on this article to see an example of sliding windows!
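Here is a minimal Python sketch of the sliding-window idea, using a random array as a stand-in for a camera frame; in a real pipeline, each window would be handed to the CNN for classification:

```python
import numpy as np

def sliding_windows(image, window_size, stride):
    """Yield (row, col, patch) for each window position over the image."""
    h, w = image.shape[:2]
    for top in range(0, h - window_size + 1, stride):
        for left in range(0, w - window_size + 1, stride):
            yield top, left, image[top:top + window_size,
                                   left:left + window_size]

image = np.random.rand(64, 64)  # stand-in for a camera frame
for top, left, patch in sliding_windows(image, window_size=32, stride=16):
    # In a real system, each 32x32 patch would be passed to the CNN here.
    print(top, left, patch.shape)
```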
What about objects bigger or smaller than our boxes? This is where YOLO—”you only look once”—comes into play. YOLO is another algorithm, and it’s used to create a predictive grid, a “probability map” out of an image. YOLO makes predictions about what each cell of the grid is, using probabilities. These probabilities are then used in creating larger predictions of what the objects in the image are.
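To make the “probability map” idea concrete, here is a toy sketch with random numbers standing in for a network's output: each cell of an S x S grid gets one probability per class, and the most likely class can be read off per cell:

```python
import numpy as np

# Pretend network output for a 3x3 grid and 4 object classes.
S, num_classes = 3, 4
class_probs = np.random.rand(S, S, num_classes)
class_probs /= class_probs.sum(axis=-1, keepdims=True)  # normalize each cell

best_class = class_probs.argmax(axis=-1)  # most likely class per grid cell
best_prob = class_probs.max(axis=-1)      # and its probability
print(best_class)
print(best_prob)
```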
Now for object localization. Non-max suppression, another algorithm, is used to take into account that objects may span more than one grid cell. Grid cells with probabilities below a certain threshold are discarded, and the cells with the greatest probabilities are kept.
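Here is a simplified Python implementation of non-max suppression, assuming boxes are given as (x1, y1, x2, y2) corners with confidence scores; the thresholds are illustrative:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    """Keep the highest-scoring boxes, dropping overlapping duplicates."""
    candidates = sorted(
        (pair for pair in zip(boxes, scores) if pair[1] >= score_thresh),
        key=lambda pair: pair[1], reverse=True)
    kept = []
    for box, score in candidates:
        if all(iou(box, kept_box) < iou_thresh for kept_box, _ in kept):
            kept.append((box, score))
    return kept

# Two overlapping detections of one pedestrian, plus a separate detection;
# NMS keeps the better of the duplicates and the separate box.
boxes = [(10, 10, 50, 80), (12, 14, 52, 84), (100, 20, 140, 90)]
scores = [0.9, 0.75, 0.8]
print(non_max_suppression(boxes, scores))
```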
There’s obviously much more to learn about CNNs, YOLO, and non-max suppression. This is just a basic overview, but it does break down the way self-driving cars are able to “see” their surroundings. Using these algorithms, the cars can identify and locate pedestrians, traffic lights, other vehicles, and more.
All of this tech has to come together and function properly in order for an autonomous car to work correctly and safely. Object detection needs to work fast and have a very high accuracy. In the future, speed and accuracy can hopefully be improved so that self-driving cars can get out and on the road!
“A Comprehensive Guide to Convolutional Neural Networks - the ELI5 Way” by Sumit Saha, Medium (https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53)
“How Do Self-Driving Cars See? (And How Do They See Me?)”, Wired (https://www.wired.com/story/the-know-it-alls-how-do-self-driving-cars-see/)
“How Do Self-Driving Cars See?” by Albert Lai, Medium (https://towardsdatascience.com/how-do-self-driving-cars-see-13054aee2503)
Image Credit: Adobe Stock
By Kathy Xing
According to Wolters Kluwer, a Dutch American information services company, as many as a quarter of all organizations have incorporated a robot that imitates human conversation, be it a chatbot or a virtual assistant. Natural language processing (NLP) is a branch of artificial intelligence meant to understand and mimic human conversational cadences. Today, NLP powers predictive word suggestions and voice-activated assistants such as Alexa and Siri. These same capabilities, however, have found new applications during the COVID-19 pandemic.
During this time, quick access to accurate information is crucial. NLP can facilitate the spread of up-to-date information and guidelines about the virus because it can accurately translate content, especially key phrases, into the world’s many languages. Currently, platforms like Google Translate support translation for only 109 languages at various levels. On his blog, however, Daniel Whitenack, a data scientist with a PhD in Mathematical and Computational Physics from Purdue University, describes working with colleagues at SIL International to translate the phrase “wash your hands” into 544 languages. They used Multilingual Unsupervised and Supervised Embeddings (MUSE), a Python library that utilizes multilingual word embeddings to enable NLP training across many languages. For specifics on this process, follow the link below to Daniel Whitenack’s blog.
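For intuition, here is a toy Python sketch of translation with word embeddings - using tiny made-up vectors, not the real MUSE embeddings. In a shared multilingual space, a word's nearest neighbor in another language is a candidate translation:

```python
import numpy as np

# Made-up vectors in a shared embedding space: translations sit close together.
english = {"wash": np.array([0.9, 0.1, 0.0]),
           "hands": np.array([0.1, 0.9, 0.2])}
spanish = {"lavar": np.array([0.88, 0.12, 0.05]),
           "manos": np.array([0.12, 0.85, 0.22]),
           "casa":  np.array([0.50, 0.10, 0.90])}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def translate(word):
    """Return the target-language word whose vector is closest."""
    return max(spanish, key=lambda w: cosine(english[word], spanish[w]))

print(translate("wash"), translate("hands"))  # -> lavar manos
```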
Aside from translation, NLP has also impacted access to and spread of information by assisting people’s search for answers regarding COVID-19. Various interfaces to answer COVID-19-related searches have been developed, such as covidsearch by researchers from Korea University and covidex by researchers from the University of Waterloo and NYU. These interfaces answer COVID-19-related questions based on CORD-19, the COVID-19 Open Research Dataset.
Finally, NLP has played a role in public health officials’ responses to COVID-19. According to Health IT Analytics, researchers gathered 95,000 posts from a popular COVID-19 Reddit thread and used NLP to identify 50 different discussion topics. By tracking popular topics, leaders can better understand public health concerns and priorities and address community concerns. Real-time monitoring of platforms such as Reddit can enable faster responses to the general public’s COVID-19-related questions. Furthermore, similar online platforms have been a source of misinformation regarding COVID-19; better monitoring means public health officials can better combat and mitigate its spread.
Overall, NLP has been applied to the flow of information in order to help combat COVID-19. It has played a role in providing people with information globally through accurate translations, in answering commonly searched questions about the virus, and in gathering information about public discussion of it. The speed with which NLP has been put to use only underscores its importance and its growing role in modern society.
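The article does not specify the exact technique, but topic discovery like this is often done with latent Dirichlet allocation (LDA). Here is a minimal scikit-learn sketch on a handful of stand-in posts (the study above used about 95,000):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "how long should I quarantine after exposure",
    "grocery stores are out of hand sanitizer again",
    "is loss of smell a symptom of covid",
    "working from home with kids is exhausting",
    "when will a vaccine be available",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)  # word counts per post
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words for each discovered discussion topic.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    print(f"Topic {i}:", [words[j] for j in topic.argsort()[-3:]])
```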
Daniel Whitenack’s blog, datadan.io
Health IT Analytics
By Jenny Kim
With the current pandemic impacting the world, many students are staying at home and therefore have more free time on their hands. For many high school students, this can be a great time to pick up some technical skills in engineering at home. Several websites provide online courses for different engineering disciplines.
LinkEngineering provides tutorials for design and engineering projects such as designing a bicycle helmet, investigating insulation materials, or building a psychrometer. This website is designed for students in kindergarten through twelfth grade. By providing in-depth tutorials and explanation guides, this resource helps students explore their creative sides and gain a greater understanding of what the engineering fields have to offer. It also describes the differences between electrical engineering, chemical engineering, and other disciplines, which can be especially helpful for high school seniors applying to colleges.
NBC Science offers a wide array of news and videos to help students learn about new developments in the science and technology world. The site covers topics such as DNA, robotics, and the latest updates on the coronavirus. It is especially helpful for visual learners, as it leans more on images and videos about space and the environment than on written articles.
Last but not least, PBS Learning Media’s Engineering and Technology page provides students with projects and resources that lead to engineering design. This website provides lessons for each grade level. It also includes a section for possible career options that give in-depth explanations of engineering fields which can help older students choose a major and career.
Overall, these websites are great resources for students who are interested in expanding their knowledge of engineering. They provide great beginner to advanced design projects to work on. Additionally, they are useful for high school students who need assistance in choosing their major.
Photo by Florian Olivo on Unsplash
By Manasi Patel
As a result of quarantine, people are turning to online alternatives for work, daily tasks, and entertainment. This includes competitive programming, which involves solving problems that require fundamental algorithms and data structures. Within the algorithm subdivision, quick sort is an important algorithm that can be used to solve many problems.
Quick sort is both a sorting algorithm and a divide-and-conquer algorithm. With divide and conquer, a problem is recursively broken down into smaller sub-problems until those sub-problems become easy to solve. This makes quick sort an effective sorting algorithm, especially when working with a large set of data. Once the algorithm finishes, the array is sorted.
The quick sort process begins by choosing a pivot. The element you choose as your pivot can vary; it could be the center, rightmost, or leftmost element. After choosing the pivot, the next step is partitioning. Once both steps are complete, the pivot is in its correct place within the array: all the elements to its left are smaller than it, and all the elements to its right are greater. The following link shows an example of choosing a pivot and the partitioning process, using numbered cups to follow along with: https://www.youtube.com/watch?v=MZaf_9IZCrc.
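For reference, here is a short Python implementation of the process just described, using the rightmost element as the pivot (the Lomuto partition scheme):

```python
def quick_sort(arr, low=0, high=None):
    """Sort arr in place, using the rightmost element as the pivot."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)  # pivot lands in its final position
        quick_sort(arr, low, p - 1)    # recurse on the smaller elements
        quick_sort(arr, p + 1, high)   # recurse on the greater elements

def partition(arr, low, high):
    pivot = arr[high]
    i = low  # boundary of the "smaller than pivot" region
    for j in range(low, high):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[high] = arr[high], arr[i]  # move the pivot into place
    return i

nums = [7, 2, 5, 1, 6, 3, 4]
quick_sort(nums)
print(nums)  # [1, 2, 3, 4, 5, 6, 7]
```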
While this algorithm is fast, its worst-case running time is O(n²), where the notation O(...) describes how the running time grows with the number of elements n. The worst case occurs when the pivot is the smallest or largest element in the array, producing two badly unbalanced partitions; this typically happens when the pivot is the leftmost or rightmost element and the array is already sorted, sorted in reverse order, or made up of identical elements. To avoid this, make sure to choose a reasonable pivot. On the other hand, the best-case running time is O(n log n), which occurs when each pivot splits the array into two roughly equal halves - for example, when the pivot ends up at the middle value, leaving each partition with at most n/2 elements.
While quick sort is a computer algorithm, you can practice it with playing cards or any other items you can arrange. Practicing the procedure with everyday items can help you understand how the algorithm works and how it applies to computer science. By numbering each item, for example using the cards ace through seven, and going through the quick sort process, you will be able to see how to choose a pivot and partition. Overall, quick sort is a fundamental algorithm that comes up in many competitive coding problems. It is effective at organizing large sets of data and is efficient, often outperforming merge sort, another sorting algorithm, in practice. To access examples and problems, check out programming websites like GeeksForGeeks.
Websites to practice competitive programming:
Photo by Carlos Muza on Unsplash
By Mehak Garg
AI has disrupted numerous industries such as communication, healthcare, and transportation by increasing access to information, creating surgical robots, and making self-driving cars. AI applications such as biometrics, deep learning platforms, and AI-optimized hardware are relatively new innovations that have the ability to make even bigger impacts in hundreds of industries. In order to ensure this impact remains positive, it is imperative that we consider the safety and ethics of AI before implementing it further in society.
In the past, we’ve seen how AI has been misused. For example, in mortgage lending, some AI systems discriminate against minority groups even though they were programmed with the intention of being fair. According to researchers at UC Berkeley, “Both online and face-to-face mortgage lenders charge higher interest rates to black and Latino borrowers, costing those homebuyers up to half a billion dollars more in interest every year.” In 2018, Amazon’s AI-based hiring system was found to be biased against women, ranking their resumes lower and making them less likely to be hired. After the incident, Amazon scrapped the system and rebuilt it to mitigate the issue. AI can even be discriminatory in our search bars: women searching for a job on Google are less likely than men to be shown executive jobs or leadership roles.
AI ethics are a set of standards that can guide developers in using AI technologies to ensure that the final product is moral and ethical. Technology can be deemed moral if it treats everyone in society equally and doesn’t stereotype groups based on their income, gender, or race. In order to successfully apply a set of ethics to the field of AI, researchers must understand how these biases present themselves and the different ways AI technology can discriminate and be unsafe. There are three levels of bias. The first level is historical bias that already exists in the data set. The second level encompasses representation and measurement bias, which result from how the data is collected and labeled. The third level includes evaluation and aggregation biases, which result from choices made when building and testing the algorithm.
In order to make AI safe to use and ethical, researchers have to optimize algorithms to limit the effect of these biases. Similar to how in real life there are multiple ways to solve a problem, there are multiple ways to program an AI. We can use different learning models for different types of problems. In fact, depending on the model, unsupervised learning might be more discriminatory than supervised learning. Supervised learning is when we train the machine as if it were in the presence of a teacher. Unsupervised learning is when we let the machine act on the data without any guidance.
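A minimal scikit-learn sketch of the contrast, with made-up applicant data: the supervised model learns from labels (the “teacher”), while the unsupervised model groups the same data without any:

```python
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy applicant features: [income in $1000s, credit score] (made-up numbers).
X = [[30, 600], [35, 620], [90, 760], [95, 780]]
y = [0, 0, 1, 1]  # labels (e.g., loan denied/approved) act as the teacher

# Supervised: learn the mapping from features to labels.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[60, 700]]))

# Unsupervised: no labels; the model finds groupings on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```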
To combat historical bias, researchers and others involved with the AI project should repeatedly check the data set to make sure it represents a diverse sample. Asking sample test questions while combing through the data set, such as which demographic gets the most loans or if females get more loans than men in a loan-centered dataset, can help you find potential areas of bias in the algorithm. As a final check, companies should monitor the real-world results of their algorithms like the demographics that are actually getting loans. Monitoring results regularly allows companies to act proactively and resolve these biases much more efficiently.
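As a sketch of what such a check might look like - with a hypothetical decision log and illustrative column names - a few lines of pandas can surface approval-rate gaps between groups:

```python
import pandas as pd

# Hypothetical loan-decision log; the columns are illustrative only.
df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "M", "F"],
    "approved": [1,    1,   0,   1,   0,   1,   1,   0],
})

# Approval rate per group: a large gap is a signal to investigate further.
print(df.groupby("gender")["approved"].mean())
```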
Fortunately, more and more companies have been prioritizing ethics and have outlined how an ethical algorithm should function. Different organizations, private companies, and researchers have established five goals for every AI system. According to Anna Jobin from the Health Ethics and Policy Lab, an AI system is deemed ethical or safe if it employs transparency, justice and fairness, non-maleficence, responsibility, and privacy. AI can make immeasurable impacts in fields spanning food production to the defense industry. Just as rebuilding Amazon’s hiring system around ethical principles helped reduce bias, coupling AI’s potential with ethical guidelines will result in magnified impacts that can further benefit society.
Image Credit: Nikkei Asian Review
By Tia Jain
With about 2.5 million confirmed cases and nearly 170,000 deaths worldwide, the COVID-19 pandemic has caused a global catastrophe, as healthcare facilities find themselves struggling to treat overwhelming numbers of patients with insufficient supplies. Nobody would have thought that this year, words like "social distancing" and "quarantine" would become commonplace. In the midst of this global pandemic, many scientists are wondering if and how AI can be applied to help fight COVID-19. However, since AI is still a budding field, much of its potential remains unexplored, particularly its application to healthcare.
Upon closer inspection of the pandemic's growth, the Harvard Business Review reveals that the main reason why the virus was not contained in its infancy is because our global economy and health care systems are "geared to handle linear, incremental demand," while COVID-19 grows at an exponential rate. Our national health system cannot keep up with this kind of demand, so naturally, many researchers have turned to technology, specifically AI, to find solutions that are both effective and have global coverage.
The first step in handling any virus is being able to diagnose it properly. According to the MIT Technology Review, COVID-Net does exactly that. Developed by Linda Wang and Alexander Wong at the University of Waterloo and the AI firm DarwinAI in Canada, COVID-Net is a convolutional neural network that can help spot COVID-19 in chest x-rays. A convolutional neural network (CNN) passes an image through a series of convolutional layers, followed by fully connected layers, to arrive at a classification. This makes image classification - the computer vision task of assigning an image to one of several pre-established categories based on its visual content - significantly easier. For example, a CNN used for image classification may be able to classify an image as a dog if it contains four legs, ears, a tail, fur, and other dog-like characteristics.
As for the training database, COVID-Net specifically was trained on 6,000 images spanning a variety of lung diseases. In general, the more data you train your model on - that is, the larger your dataset - the more accurate your results will be. The ratio of data used for training versus testing is typically 80:20. Reserving 20% of the data only for testing prevents a common issue in ML called overfitting, which is when the model fits the training data too closely and consequently performs poorly on the testing data.
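To illustrate the structure - this is a generic toy model, not COVID-Net, and the categories are assumed - here is a small Keras CNN for classifying chest x-rays:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Convolutional layers learn visual features; the fully connected layers
# at the end turn those features into a classification.
model = keras.Sequential([
    keras.Input(shape=(224, 224, 1)),       # a grayscale chest x-ray
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),  # e.g. normal / pneumonia / COVID-19
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```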
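A minimal scikit-learn sketch of the 80:20 split, on stand-in data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data: 100 "images" flattened to 10 features each, plus labels.
X = np.random.rand(100, 10)
y = np.random.randint(0, 2, size=100)

# Reserve 20% of the data for testing, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
print(len(X_train), len(X_test))  # 80 20
```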
Once you can identify who has the virus, the second step is stopping its spread. According to the MIT Technology Review, Andrew Ng's startup LandingAI can alert people if they are not "social distancing," or staying six feet away from other people. Embedded in a security camera system, a trained neural network identifies people in a scene in real time; after the camera is calibrated to real-world dimensions, a second algorithm computes the distances between them.
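Here is a toy NumPy sketch of the distance-checking step - not LandingAI's actual system - assuming a detector has already produced positions calibrated from pixels to feet:

```python
import numpy as np

# Hypothetical (x, y) positions of detected people, in feet.
positions = np.array([[0.0, 0.0], [4.0, 3.0], [20.0, 5.0]])

# Pairwise Euclidean distances between every pair of people.
diffs = positions[:, None, :] - positions[None, :, :]
distances = np.sqrt((diffs ** 2).sum(axis=-1))

# Flag pairs closer than six feet (ignoring each person's distance to themselves).
too_close = (distances < 6.0) & ~np.eye(len(positions), dtype=bool)
print(np.argwhere(too_close))  # people 0 and 1 are 5 feet apart
```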
In addition to addressing COVID-19, AI can also be used for several healthcare-focused solutions. For example, doctors at OrthoAtlanta have recently begun to use Suki, which is "an AI-powered, voice-enabled digital assistant for doctors that is designed to ease the burden of documentation." This enables doctors to focus on treating patients, rather than writing down notes while the patient is speaking. Suki effectively understands voice commands and uses them to create clinically accurate notes that upon confirmation, are inputted into the electronic health record system. Clearly, as AI advances, more healthcare systems are leveraging its power, revolutionizing the way that doctors treat their patients and diagnose illnesses.
That being said, despite the abundance of benefits that AI provides, it is important that consumers are always wary of the accuracy of AI-generated advice. In fact, the most pervasive issue is that people might take AI-generated advice too seriously. For example, if an AI system tells someone that their symptoms point more toward ordinary influenza than the Coronavirus, the individual should still seek medical help if they are concerned or feel otherwise. Currently, AI is not advanced enough to substitute for a physical checkup by a human doctor. So while AI can help you understand your symptoms or serve as a baseline guideline in interpreting them, it is by no means a replacement for humans; always treat its advice with caution.
Second, the use of AI presents ethical issues around storing personal data. Although LandingAI is an impressive application of machine learning to increase safety, it also raises many privacy concerns. Specifically, since LandingAI gathers data in real time from real people, there must be an appropriate consent process. It is unethical to film anyone without their consent, so before such a system is implemented in any public setting, everyone affected must agree to be recorded for safety purposes, or else their personal rights are being violated.
Every day, breaking news stories regarding COVID-19 are on headlines of every news station. With so much information being provided to us, it can be difficult to figure out which sources of media to trust. If you are interested in reading specifically about the latest coverage of the Coronavirus and tech, subscribe to the New York Times Coronavirus Briefing or the Algorithm from MIT Technology Review Coronavirus Newsletter for a reliable rundown of the newest updates! Also, make sure to practice social distancing and to adhere to the safety guidelines given by the CDC and the WHO.
A final question for the readers: If you could apply AI to either the detection, spread, or treatment of COVID-19, which would you pick? What ideas do you have in applying any machine learning principles to increasing social welfare and decreasing the growth in the number of cases? Also, do you think that the pros of using AI outweigh the cons? I'd love to hear from you. Feel free to share your thoughts in the comments below!
Thank you for reading and stay safe!
If you are interested in reading more about the research I mentioned in the article specifically, check out the following links!
Photo by Sean O. on Unsplash
Note: this post is geared towards middle- and high-school students.
In the past few months, many middle- and high-school students around the country have had their summer camps, jobs, or other activities cancelled due to the COVID-19 pandemic. If you’re one of them, you’re probably wondering: What now? Here at Allgirlithm, we want to help you make your summer fun and rewarding. Here’s a list of computer science and artificial intelligence-related activities you could try this summer:
1. Expand your Coding Portfolio
One thing you could do this summer is expand your coding portfolio! You could do so by working on side projects—for example, apps, games, or algorithmic puzzles. This will often expose you to new technologies and solutions, and improve your coding abilities and confidence. Here are specific ideas, along with some possible programming languages and IDEs:
And if you ever find yourself stuck on a coding or design problem, you can always consult StackOverflow or Reddit!
2. Create a Personal Website
Creating a personal website is one of the best ways to showcase the work you’ve done and highlight your accomplishments. If you’re completely new to web development, you could give Weebly and Wix a try—both are easy-to-use, WYSIWYG (“what you see is what you get”) website builders. Or, if you’re feeling a bit more ambitious, you could try Wordpress, a blog-focused service that allows for a bit more customization.
3. Conduct Remote and/or Self-Guided Research
Another thing you could do this summer is conduct research at home! Your research could be related to anything, but here are some CS/AI-related ideas:
4. Contribute to Open-Source Projects, including Coronavirus-Related Projects
GitHub has millions of public repositories for open-source projects you might want to contribute to! These include everything from fun and quirky web extensions to IDE plugins to machine learning software packages for technologies like Tensorflow.
Tons of different coronavirus-related projects have emerged in light of the pandemic. You could look through some open-source repositories—many of them are tagged “coronavirus”, “covid”, or something similar—and contribute your code or non-technical work to any that interest you. This GitHub repository has a list of coronavirus-related projects; be sure to check it out!
5. Participate in Online Hackathons and Coding Competitions
If you go to Devpost and scroll down, you’ll see a list of online hackathons taking place in the near future—for example, hack:now and HackDSC. Take a look at these and see which ones you’re interested in! Many of them offer cash prizes and tech gadgets for winners.
A number of websites host coding competitions on scheduled days throughout the year, as well as training programs and practice problems open to users anytime. Some examples include Topcoder, CodeForces, CodeChef, HackerRank, USACO, and USACO’s Training Gateway. Training for and participating in coding competitions will improve your algorithmic thinking and problem-solving skills, which will help you in many of your other endeavors!
6. Take an Online Course
Coursera, edX, and Udemy are all great websites offering a variety of online courses. Codecademy is also offering Codecademy Pro for free during the remainder of this semester.
If you want to try some college-sponsored courses, MIT OpenCourseWare, Harvard Online Learning, and Stanford Online are great places to start. One of the most popular CS MOOCs of all time is Harvard’s CS50 class, an introductory class in programming, algorithms, and data structures.
Note: many of these activities require Internet connection. Some Internet service providers are offering free or low-cost programs. Here are a few links to information that may be of interest to you; it may also help to check with your school district or city/state government for more information.
1. Your Guide to Internet Service During New Coronavirus (COVID-19) Outbreak
2. Get online during the coronavirus outbreak
3. Comcast, AT&T, Sprint offering free or low-cost internet for students amid COVID-19 crisis
Image Credit: Complex Magazine
By Ore James
With most teens stuck at home in the midst of a pandemic, Netflix and other streaming services have become somewhat of a refuge. As browser extensions like Netflix Party facilitate remote interactions with friends, the relative normalcy provided by streaming may explain its popularity among the quarantined - of course, the data science behind these recommendations also plays a role in keeping us glued to the platform.
Most familiar with Netflix have probably seen genres like “critically-acclaimed movies about friendship” or “comedies for hopeless romantics” mixed into their homepages. However odd these micro-genres may seem, there exists a solid method behind them - personalized recommendations rely on machine learning algorithms to keep subscribers engaged, aiming to prolong our binge-watching sessions.
Recommendation systems are simply platforms to suggest content based on existing user preferences. The Netflix recommendation system compresses its large streaming library into personalized, easily-navigable rows using machine learning.
Within machine learning, systems regularly rewrite their algorithms - their data-based instructions - according to user data. Essentially, these systems collect data from users, learn from that data, and apply what they learn to make decisions. As Todd Yellin, Netflix’s VP of product innovation, told Wired in 2017, “What we see from those profiles is the following kinds of data — what people watch, what they watch after, what they watch before, what they watched a year ago, what they’ve watched recently and what time of day.” In addition to this data, Netflix relies on man-made tags, which categorize the service’s content, to determine the types of media users prefer. Taking all of this into account, machine learning algorithms interpret these data sets to ultimately decide which content to recommend.
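Netflix's production algorithms are far more sophisticated, but a toy collaborative-filtering sketch shows the core idea: find the most similar user from watch data, then recommend something they liked that you haven't seen. All numbers here are made up:

```python
import numpy as np

# Toy viewing matrix: rows are users, columns are titles,
# entries are ratings inferred from watch behavior (0 = unseen).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 3, 0],
    [1, 0, 5, 4],
])

def recommend(user, ratings):
    """Suggest the unseen title rated highest by the most similar user."""
    sims = ratings @ ratings[user] / (
        np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user]))
    sims[user] = -1              # don't match the user to themselves
    neighbor = sims.argmax()     # most similar other user
    unseen = ratings[user] == 0
    return np.where(unseen, ratings[neighbor], -1).argmax()

print(recommend(0, ratings))  # -> 2: the title user 0's "taste twin" enjoyed
```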
Machine learning can take on highly specific forms to maximize user experiences. Recently, for example, Netflix incorporated an artwork-based algorithm to further personalize recommendations. The algorithm uses user preferences to determine which artwork will appear next to movies and shows. Machine learning comes into play when the algorithm chooses the art - data suggesting a user likes horror movies, for instance, may compel the algorithm to choose dark and chilling artwork for “Stranger Things.” By incorporating artwork, the algorithm demonstrates data can combine with a sense of creativity to further increase user engagement.
Image Credit: UX Planet
The downside to Netflix’s approach of highly specific suggestions, as many subscribers have observed, is that data from a customer's watch history often fails to reveal their actual tastes. For instance, within the current system, users who accidentally click on a documentary may face homepages cluttered with docudramas they have no interest in watching. The potential for users to miss out on content - or, conversely, to face a homepage of content they don’t want to see - presents a glaring flaw in recommendation systems that can ultimately harm the subscriber experience. Content creators have voiced similar concerns: when Netflix canceled the sitcom “Tuca & Bertie” after one positively reviewed season in 2019, show creator Lisa Hanawalt pointed to the algorithms as a cause of low viewership.
Of course, machine learning is just one subset of artificial intelligence. Deep learning is a sector of machine learning in which a machine uses artificial neural networks, inspired by the brain’s neural networks, to “train” itself to make more accurate predictions. In practice, deep learning enables artificial intelligence to “think” and learn. While the technology is more commonly used to identify photos or audio, services like Movix.ai employ deep learning to recommend movies by adapting to user preferences in real-time, aiming for more accurate movie recommendations. Netflix itself seems to be slowly moving away from strict machine learning, following competitors like HBO Max; in August 2019, the service began beta-testing a Collections section, which relied on humans, not algorithms, to group titles for users.
While Netflix’s complex algorithms currently sort its thousands of titles well enough to keep many of us on the platform, it’s clear the future possibilities of machine learning, deep learning, and creativity in streaming are endless.
Photo Credit: MIT Technology Review
By Angela Choi
Recent research has shown that of the 310 million patients worldwide who undergo surgery each year, 50 million suffer surgical complications, which range widely in severity. But with the use of AI-assisted robotic surgery on the rise, we can’t help but wonder: how much of an impact could surgical robots have on these numbers?
Given that robots have the superhuman ability to repeat precise motions without fatigue, they can help reduce the effects of accidental movements by surgeons. Thus, robotic procedures are most commonly used with minimally invasive surgeries, which are performed through tiny incisions. This allows surgeons to perform extremely delicate and physically demanding procedures with much more accuracy and control than they could with any other traditional techniques. Not only does this reduce the chance of complications for patients, but it also provides the benefits of quicker recovery, smaller scars, and less blood loss and pain.
Furthermore, through new breakthroughs in artificial intelligence, robots can also use data from previous operations to refine surgical techniques and identify ways to further reduce risk. AI and machine vision can help analyze scans to detect cancerous cases, and surgical robots can provide real-time guidance, warnings, and advice to surgeons by drawing on thousands of prior cases stored in the cloud. For example, a visual overlay inside the surgical space could point out the location of critical blood vessels behind the current operating plane. In this case, the AI would suggest that the surgeon avoid these specific areas, showing how successful surgeons have traversed the anatomy in the past and flagging the specific tools needed to take action.
One application that has already proven effective is in delicate eye surgeries that treat age-related macular degeneration, the leading cause of severe vision loss for people over the age of 60. Robotic systems can successfully remove membranes from a patient’s eyes or blood from underneath the retina, and in some cases are even more effective than manual procedures.
Even with the significant growth of support for AI-assisted robotic surgeries as a safe alternative, they still involve risks, some of which are similar to those of conventional surgery. Although mechanical malfunction or failure occurs at an extremely low rate - in less than 1% of cases - it can still happen and cause infection or other complications. Another issue is that complex methods for action recognition require manually labeled samples such as videos, which is both time-consuming and leaves room for human error.
As with all other applications of artificial intelligence, it is important to recognize the possible dangers that come with robotic surgeries, but the promise of AI to improve outcomes in healthcare and provide unprecedented solutions also should not be overlooked. Support for these new advances from governments, tech companies, and healthcare providers will only continue to expand as more and more robotic procedures are successfully tested by surgeons. The future of AI-assisted surgical robots offers much more than what we currently envision of a surgeon’s hands, but only time will tell what innovative changes will result as these new technologies are embraced.
Robotics Business Review
Bernard Marr & Co.