Photo Credit: MIT Technology Review
By Angela Choi
Of the roughly 310 million patients around the world who undergo surgery each year, recent research shows that 50 million suffer surgical complications, which range widely in severity. With the use of AI-assisted robotic surgery on the rise, we can't help but wonder: how much of an impact could surgical robots have on these numbers?
Because robots have the superhuman ability to repeat precise motions without fatigue, they can help counteract accidental movements by surgeons. Robotic procedures are therefore most commonly used in minimally invasive surgeries, which are performed through tiny incisions. They allow surgeons to carry out extremely delicate and physically demanding procedures with far more accuracy and control than traditional techniques permit. Not only does this reduce the chance of complications for patients, but it also offers quicker recovery, smaller scars, less blood loss, and less pain.
Furthermore, through new breakthroughs in artificial intelligence, robots can also use data from previous operations to refine surgical techniques and identify ways to further reduce risk. AI and machine vision can help analyze scans to detect cancer, and surgical robots can provide real-time guidance, warnings, and advice to surgeons by drawing on thousands of prior cases stored in the cloud. For example, a visual overlay inside the surgical space could point out the location of critical blood vessels behind the current operating plane. The AI could then suggest that the surgeon avoid these areas, showing how successful surgeons have traversed the same anatomy in the past and which specific tools they used to do so.
One application that has already proven effective is delicate eye surgery to treat age-related macular degeneration, the leading cause of severe vision loss in people over the age of 60. Robotic systems can successfully remove membranes from a patient's eye or blood from underneath the retina, and in some cases are even more effective than manual procedures.
Even with growing support for AI-assisted robotic surgery as a safe alternative, it still involves risks, some of which are similar to those of conventional surgical methods. Although mechanical malfunction or failure occurs in fewer than 1% of cases, it can still happen and cause infection or other complications. Another issue is that the complex methods used for action recognition require training samples, such as videos, that must be manually labeled, which is both time-consuming and prone to human error.
As with all other applications of artificial intelligence, it is important to recognize the possible dangers of robotic surgery, but the promise of AI to improve outcomes in healthcare and provide unprecedented solutions should not be overlooked either. Support for these advances from governments, tech companies, and healthcare providers will only continue to expand as more robotic procedures are successfully tested by surgeons. AI-assisted surgical robots offer far more than an extension of a surgeon's hands, but only time will tell what innovative changes will result as these new technologies are embraced.
Sources: Robotics Business Review; Bernard Marr & Co.
Graphic Credit: Getty Images
By Kathy Xing
Natural language processing (NLP) is a branch of artificial intelligence that broadly focuses on interactions between human language and computers. It has a broad goal of enabling computers to understand and derive meaning from natural language—the way humans communicate—in a smart and useful way.
According to Forbes, NLP first arose as machine translation in the 1950s and was originally meant to help with code-breaking. In the 1960s, language programs like SHRDLU successfully enabled user interaction in a block world, where the computer would respond to requests to move or manipulate blocks. ELIZA, the first chatbot, was also developed during this time. Up until the 1980s, hand-written rules and parameters guided NLP, but the introduction of machine learning algorithms and statistical NLP enabled the shift to NLP as we know it today.
In modern times, NLP combines artificial intelligence with computational linguistics and computer science to analyze human language. According to Investopedia, this can be broken down into a series of tasks. The computer first needs to understand the language it receives: a built-in statistical model breaks speech down into tiny units and statistically finds the most likely words and sentences that were said. Then, the computer defines words grammatically as nouns, verbs, and so on using coded lexicon rules. After these first two steps, the computer has a general idea of what was said. Finally, the computer must convert its internal representation into an audible or textual response to the user's input.
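The pipeline described above can be sketched in a few lines of code. This is a deliberately minimal toy, not how production NLP systems work: the tiny lexicon, the rules, and the response format are all invented for this illustration, and real systems replace each step with statistical models.

```python
# Toy sketch of the NLP pipeline: tokenize the input, tag each word with
# a part of speech from a tiny hand-coded lexicon, then turn the analysis
# into a textual response.

LEXICON = {
    "the": "determiner", "a": "determiner",
    "dog": "noun", "cat": "noun",
    "runs": "verb", "sleeps": "verb",
    "fast": "adverb",
}

def tokenize(sentence):
    """Step 1: break the input into small units (here, lowercase words)."""
    return sentence.lower().rstrip(".!?").split()

def tag(tokens):
    """Step 2: label each word grammatically using coded lexicon rules."""
    return [(word, LEXICON.get(word, "unknown")) for word in tokens]

def respond(tagged):
    """Step 3: convert the analysis into a textual response."""
    nouns = [w for w, pos in tagged if pos == "noun"]
    verbs = [w for w, pos in tagged if pos == "verb"]
    if nouns and verbs:
        return f"You mentioned that the {nouns[0]} {verbs[0]}."
    return "Sorry, I didn't understand that."

print(respond(tag(tokenize("The dog runs fast."))))
# prints: You mentioned that the dog runs.
```

Even this toy shows why the lexicon step matters: any word outside the hand-coded dictionary comes back as "unknown", which is exactly the limitation that statistical NLP was introduced to overcome.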
Currently, NLP has countless applications: it powers predictive word suggestions on mobile devices and in Google searches as well as voice-activated assistants like Siri. Millions of people are bringing smart speakers like Alexa into their homes; this technology is built entirely around NLP, as it takes in voice commands and uses algorithms to decipher their meaning and provide an appropriate response. More sophisticated chatbots that help answer customer questions are also on the rise. In fact, according to a survey from Oracle Corporation, 80 percent of sales and marketing leaders have implemented or plan to implement chatbots to better serve customers.
While much progress has been made on NLP since the 1950s, various difficulties remain. Human language is inherently ambiguous, and this ambiguity is the cause of most difficulties with NLP. For example, at the word level, it is challenging for a computer to distinguish whether "board" is being used as a noun or a verb, and at the sentence level, it can be difficult to tell whether "He lifted the beetle with a red cap" means that the beetle was moved with a red cap or that the beetle that was lifted had a red cap. As a result, even though many prominent companies are improving NLP and have built successful products around it, no one has yet created a holistic cognitive platform that understands human language at the level of an actual human.
Image Credit: Microsoft Research YouTube channel (InnerEye software)
By Vaughn Luthringer
When we think of computer vision, facial recognition, the technology your iPhone uses to unlock itself, is what comes to mind, at least for me. Computer vision is "how computers see and understand digital images and videos," and we often associate it with the tasks our phones can handle: identifying faces, pets, and so on. Having such advanced technology right at our fingertips is amazing, but computer vision has applications far beyond a phone. It reaches as far as hospitals, in fact.
In a tech world increasingly dominated by artificial intelligence, computer vision was bound to reach the medical world at some point. Now that it has, it’s being used to diagnose and treat patients.
MaxQ AI is a small company that offers software capable of identifying abnormalities in a patient's brain. The algorithm behind the software is trained on millions of brain scans uploaded by developers. Put to the test, it points out irregularities in a scan uploaded by a medical professional. With this, the patient can be given effective treatment based on the conclusions drawn by MaxQ AI's software.
The software is still in the process of being approved by the US Food and Drug Administration, but MaxQ AI CEO Gene Saragnese hopes that existing partnerships with Samsung, General Electric Company, and the International Business Machines Corporation will allow the software to benefit up to seventy-five percent of hospitals in the world.
Microsoft’s InnerEye is another computer vision application aimed at identifying irregularities. Given a three-dimensional scan, the software can calculate the dimensions of the organ or other body part displayed. Then, it can pinpoint tumors and abnormalities. Like MaxQ AI’s software, InnerEye requires lots of training data and can be extremely useful to medical professionals.
Triton, another computer vision-based piece of software offered by Gauss, can aid in tracking blood loss during surgery, and it's all run from an iPad! With the software, a blood-soaked sponge can be analyzed to estimate the patient's total blood loss and rate of blood loss. Trained on sample data, Triton draws these conclusions by estimating how much blood is concentrated in the sponge being held up to the camera.
In a study of C-section patients, Triton identified more hemorrhages than surgeons' naked eyes did and allowed treatment to be adjusted accordingly. Additionally, patients on whom Triton was used generally had shorter hospital stays.
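The core idea, scoring pixels in a photo and mapping the result to a volume estimate, can be sketched as a toy. Everything here is invented for illustration: the redness heuristic, the threshold values, and the calibration constant. Gauss's actual Triton models are proprietary and far more sophisticated, accounting for things like dilution, lighting, and sponge type.

```python
# Toy illustration of image-based blood-loss estimation: score each
# pixel's "redness" and map the bloody fraction to an estimated volume.

def estimate_blood_ml(pixels, sponge_capacity_ml=100.0):
    """pixels: list of (r, g, b) tuples from a photo of a surgical sponge.

    Returns a crude blood-volume estimate in milliliters.
    """
    def is_bloody(r, g, b):
        # Invented heuristic: the red channel clearly dominates the others.
        return r > 120 and r > 1.5 * g and r > 1.5 * b

    if not pixels:
        return 0.0
    bloody = sum(1 for p in pixels if is_bloody(*p))
    fraction = bloody / len(pixels)
    # Invented calibration: bloody fraction scales linearly with volume.
    return fraction * sponge_capacity_ml

# A 4-pixel "image": two deep-red pixels, two near-white ones.
sample = [(200, 30, 30), (210, 40, 20), (250, 250, 250), (240, 240, 240)]
print(estimate_blood_ml(sample))  # half the pixels look bloody -> 50.0
```

A real system would be trained on labeled sponge photos with known blood volumes rather than relying on a fixed threshold, but the sketch captures the pipeline: camera image in, volume estimate out.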
Full circle, back to facial recognition! AiCure is a startup whose software monitors a patient's ingestion of medication. The patient takes their prescribed medication in front of a phone camera, and facial recognition technology confirms the process. In the context of clinical trials, AiCure's software can help researchers track how many participants drop out.
Computer vision has a home in the hospital. Applying advanced artificial intelligence to the medical field allows targeted treatment, faster diagnosis, and more. The eyes of a trained algorithm can aid in creating a more personalized medical plan, allowing for more effective and efficient treatment.
Image Credit: photodune.net
By Jenny Kim
With technology playing an ever larger role in our daily lives, more and more people, regardless of age, are becoming interested in the world of coding. Although signing up for in-person courses may sound like a good plan, there are, perhaps surprisingly, plenty of websites for learning to code online.
First, LearnPython.org is a very straightforward website that anyone can use. Python is known as one of the easiest coding languages to learn and is often used in introductory college coding classes; it is also popular as a general-purpose programming language. Using this website, you can learn to code in Python as well as several other languages offered on the site. The tutorials are organized into categories such as basics and data science, so you can choose what to learn, and a useful table of contents lets you keep track of your progress and where you left off.
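To give a taste of why Python has this beginner-friendly reputation, here is the kind of short, readable program an introductory tutorial might start with. The function name and example values are ours, not taken from any particular lesson:

```python
# A first Python exercise: a function, a loop, and formatted output.

def celsius_to_fahrenheit(celsius):
    """Convert a temperature from Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

for c in [0, 20, 100]:
    print(f"{c} C is {celsius_to_fahrenheit(c)} F")
# prints:
# 0 C is 32.0 F
# 20 C is 68.0 F
# 100 C is 212.0 F
```

Even with no prior experience, most readers can guess what this code does just by reading it, which is exactly what makes Python a popular first language.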
Next, a site called Udacity offers plenty of resources on artificial intelligence and coding. Although not all lessons are free, some, such as Web Development and Mobile Development, are. Additionally, among the many different options, you can find the lessons that work best for you.
Lastly, Khan Academy has coding resources that are catered to younger students who are interested in learning skills in technology. Using the website, you will be able to gain a firm understanding of the fundamentals of coding and computer science. For younger students, Hour of Code provides an easy introduction to coding using games. For older students, the AP Computer Science Principles course is a more challenging option.
As learning to code can not only make you more knowledgeable but also help boost your mental well-being, we hope these resources provide a great starting point for your coding journey!