In a recent article in Forbes on how AI is helping to fight the coronavirus, contributor Jun Wu describes how BlueDot, an AI database company, uses AI-powered algorithms, machine learning, and natural-language processing to warn people about how, and where, the coronavirus would spread.
On New Year’s Eve, the company was able to correctly warn its customers to avoid Wuhan, China, which was about to be the first epicenter for COVID-19. Based on inputs of airline flight paths and known travel itineraries, BlueDot was also able to project the trajectory of the epidemic.
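BlueDot's actual models are proprietary and far more sophisticated, but the core idea of projecting an outbreak's trajectory from travel data can be sketched in a few lines: rank destinations by their share of outbound flight volume from the epicenter. Everything below, city names and flight counts included, is hypothetical.

```python
# Toy ranking of epidemic export risk by outbound flight volume.
# Illustrative only; city names and monthly flight counts are hypothetical.
monthly_flights_from_epicenter = {
    "Bangkok": 1200,
    "Hong Kong": 950,
    "Tokyo": 800,
    "Singapore": 600,
}

def export_risk(flights):
    """Rank destinations by their share of total outbound flight volume."""
    total = sum(flights.values())
    ranked = sorted(flights.items(), key=lambda kv: kv[1], reverse=True)
    return [(city, count / total) for city, count in ranked]

for city, share in export_risk(monthly_flights_from_epicenter):
    print(f"{city}: {share:.0%} of outbound volume")
```

A real system would layer epidemiological modeling, itinerary data, and NLP-derived signals on top of this kind of ranking.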
BlueDot’s Insights feature sends users real-time alerts with their risk of exposure to infectious diseases based on quantified data. Image used courtesy of BlueDot
Artificial intelligence can help to chart the course of epidemics and advise epidemiologists as to where and when the novel coronavirus will spread next. It is also implemented on MCUs, allowing the host device to make decisions locally and eliminate the inevitable latency involved in deferring decisions to cloud-based processors.
With the current emphasis on how technology is making an impact on the fight against COVID-19, it is worth reviewing which edge AI technologies have recently become available.
AI and Healthcare
AI sits atop supercomputers analyzing the world’s most complex issues, and it also resides on remote MCUs, deciding a robot’s next move or analyzing information garnered from a sensor.
The critical space where a machine meets the physical world is known as “the edge.” You might envision the edge as the place where a precision instrument traces a 7-nanometer feature on an integrated circuit. AI at the edge refers to machine learning occurring directly on a device instead of on a remote server. Sending information to a remote, cloud-based processor for analysis can take too long, be too power-intensive on cloud infrastructure, and cost too much.
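The latency argument is easy to quantify. As a back-of-the-envelope sketch, compare a cloud round trip against on-device inference for a time-critical decision; all of the figures below are hypothetical placeholders, not measurements.

```python
# Back-of-the-envelope latency comparison for a time-critical decision.
# All figures are hypothetical placeholders, not measured values.
NETWORK_RTT_MS = 80       # round trip to a cloud region
CLOUD_INFERENCE_MS = 5    # fast server-side model
EDGE_INFERENCE_MS = 20    # slower on-device model, but no network hop

def meets_deadline(latency_ms, deadline_ms):
    return latency_ms <= deadline_ms

cloud_latency = NETWORK_RTT_MS + CLOUD_INFERENCE_MS  # 85 ms total
edge_latency = EDGE_INFERENCE_MS                     # 20 ms total

# A 50 ms control-loop deadline rules out the cloud path in this scenario.
print(meets_deadline(cloud_latency, 50))  # False
print(meets_deadline(edge_latency, 50))   # True
```

Even when the cloud model itself is faster, the network round trip can dominate the budget, which is the case for edge inference in machines that act on the physical world.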
An article published in the Future Healthcare Journal describes the many ways AI and related technologies are fast becoming part of mainstream medicine. Machine learning, for example, is already being used to discern treatment protocols based on symptoms and statistics.
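As a toy illustration of the idea, not any production clinical system, a model can score a patient's symptoms against recorded cases and suggest the protocol of the closest match. The cases, symptoms, and protocol names below are entirely made up.

```python
# Toy symptom-to-protocol matcher. Entirely illustrative: real systems are
# trained on large clinical datasets, not a three-row lookup like this.
CASES = [
    ({"fever", "cough", "shortness_of_breath"}, "respiratory protocol"),
    ({"fever", "rash"}, "dermatology referral"),
    ({"chest_pain", "shortness_of_breath"}, "cardiac workup"),
]

def suggest_protocol(symptoms):
    """Return the protocol of the most similar recorded case (Jaccard similarity)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    _, protocol = max(CASES, key=lambda case: jaccard(symptoms, case[0]))
    return protocol

print(suggest_protocol({"fever", "cough"}))  # respiratory protocol
```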
AI-based surgical robots can, in many instances, perform surgical manipulations on a microscopic scale not possible for any human hand. But accomplishing feats at this level, where machines directly interface with the physical world, requires that AI be installed on the MPUs that directly control those machines.
One example that we recently covered is Arm’s addition of machine-learning and neural-processing IP to its AI platform. Arm-based MPUs are at the heart of countless electronic devices worldwide. Nobody is in a better position to effect the migration of AI from remote, cloud-based servers to the edge, where decisions can be made at the nexus of the device’s action.
Here are a few other recent examples of edge AI that may eventually trickle into medical spheres. Note that none of these companies explicitly state pandemic prediction or healthcare assistance as a use case. But with the overwhelming number of COVID-19 cases, might this new technology be used to that end?
CEVA Integrates AI into Popular DSP Cores
CEVA’s WhisPro speech recognition software runs on the company’s CEVA-BX DSP cores. The CEVA-BX family consists of programmable hybrid DSP/controllers that employ an 11-stage pipeline and a 5-way VLIW (very long instruction word) microarchitecture, offering parallel processing with dual scalar compute engines.
Block diagram for the CEVA-BX DSP core. Image used courtesy of CEVA
CEVA has announced that this powerful pair can now support TensorFlow Lite for Microcontrollers. With this now-ubiquitous format, AI models can be chosen from ready-made models or developed by users, then converted to run on the CEVA-BX, deployed, and optimized.
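Part of how TensorFlow Lite fits models onto resource-constrained processors is by running them with 8-bit integer weights and activations. A minimal sketch of that affine quantization scheme, real = scale * (q - zero_point), with arbitrary example values for scale and zero point:

```python
# Sketch of the int8 affine quantization that TensorFlow Lite uses for
# resource-constrained inference: real = scale * (q - zero_point).
# The scale and zero-point values below are arbitrary examples.
def quantize(x, scale, zero_point):
    """Map a float to an int8 value, clamped to the representable range."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover an approximate float from its int8 representation."""
    return scale * (q - zero_point)

scale, zero_point = 0.05, 3
q = quantize(1.0, scale, zero_point)
print(q, dequantize(q, scale, zero_point))
```

Trading a little precision for 8-bit storage and integer arithmetic is what makes inference practical on MCU-class DSPs like the CEVA-BX.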
Erez Bar-Niv, Chief Technology Officer at CEVA, stated: “The increasing demand for on-device AI to augment contextual awareness and conversational AI workloads poses new challenges to the cost, performance and power efficiency of intelligent devices. TensorFlow Lite for Microcontrollers dramatically simplifies the development of these devices, by providing a lean framework to deploy machine learning models on resource-constrained processors.”
He continues, “With full optimization of this framework for our CEVA-BX DSPs and our WhisPro speech recognition models, we are lowering the entry barrier for SoC companies and OEMs to add intelligent sensing to their devices.”
SolidRun and Gyrfalcon Team Up on AI Inference Server
According to Dr. Atai Ziv, CEO of SolidRun, the market is saturated with new AI models that demand AI inference solutions. “While GPU-based inference servers have seen significant traction for cloud-based applications, there is a growing need for edge-optimized solutions that offer powerful AI inference with less latency than cloud-based solutions,” Ziv explains.
To solve edge AI issues related to power consumption, cost, and server real estate, edge computing solutions provider SolidRun and ASIC provider Gyrfalcon Technology Inc. have teamed up on an “edge optimized,” Arm-based AI inference server, the Janux GS31. This AI inference server, which features Gyrfalcon’s Lightspeeur 2803S neural accelerator chips, is designed to support modern neural network frameworks, low-latency decoding, and video analytics.
Block diagram of the Janux GS31. Image used courtesy of SolidRun
These systems are for AI applications that need an external AI server but for which the latency penalty of deferring to a remote, cloud-based server would be prohibitive. The two companies foresee this technology being used to monitor smart cities (including smart hospitals) and other infrastructure.
BrainChip and Socionext Double Down on AI Edge Platform
BrainChip and Socionext are two other companies that have teamed up to provide a platform for AI edge applications. BrainChip has developed a neural network processor for “local inference and incremental learning,” and the two companies put their heads together on the Akida SoC. BrainChip has also provided technical support and training for the Akida SoC, including network simulation (using the Akida Development Environment) and FPGA emulation.
Block diagram of the Akida neuromorphic SoC. Image used courtesy of BrainChip
Socionext has recently poured its efforts into side-stepping the pitfalls of cloud computing with new edge computing devices, including the SynQuacer SC2A11.
The two companies plan to integrate BrainChip’s Akida SoC with Socionext’s multi-core processor, the SynQuacer SC2A11, for “high-speed, high-density, low-power systems to perform image and video analysis, recognition and segmentation in surveillance systems, live-streaming, and other video applications.”
Will AI Rise to the Coronavirus Challenge?
While none of these collaborations explicitly cited pandemic prediction as the end goal for their technologies, their innovations have hundreds of use cases that may eventually directly impact the healthcare industry. The potential roles for AI at the edge span every level of medicine.
On a macro level, the authors of the paper on the potential for artificial intelligence in healthcare explain that AI can predict and track pandemics and aid doctors in diagnosis and treatments. It can also power innumerable smart devices, from ventilators to diagnostic equipment. And, as described, it can serve as the intelligence behind a surgeon’s trusted artificial arm.
In the future, the needs of COVID-19 patients may well exceed the capacity of any conceivable number of nurses, doctors, and aides. As such, the emergence of AI-empowered, patient-care robotic devices can readily be anticipated as well.
What’s your take on AI’s increasing role in healthcare—especially as medical professionals are running short during the COVID-19 crisis? Share your thoughts in the comments below.