An interview with our Head of Artificial Intelligence, Amin Ojani. Amin has been with Nectarine Health since July 2020, leading our team of AI and Machine Learning engineers to implement cutting-edge technology in pursuit of our goal of keeping seniors living independently in their homes for longer. Find out from an expert in the field what AI and Machine Learning mean, and how we at Nectarine Health use these technologies in our product.
Amin, what is it like working at Nectarine Health?
Nectarine Health is a multicultural, fast-paced, and inspiring start-up. Our goal is to give older people the confidence to live independently in their own homes for longer. We believe that better, more intelligent tools can help society maintain high-quality care for its aging global population. We started off by providing this solution for nurses, caregivers, and administrators in nursing homes, helping them look after their residents more effectively and enabling the delivery of better, more timely care. We now have billions of data points and insights into patterns of sleep and activity, allowing us to detect falls and provide alerts with ever greater accuracy, and making proactive assessment of well-being possible.
What does the AI team do? What roles do they have and what does the job mainly entail?
What makes working in the AI team especially interesting is the fact that we have an IoT (Internet of Things) platform that is always on and operating continuously. This means that we have a unique and rapidly growing dataset of daily behaviors from a variety of sensors. Additionally, we are consistently developing state-of-the-art Machine Learning (ML) algorithms and adding new AI features for our product, which is very exciting to be a part of.
My team of ML engineers works on end-to-end ML problems covering a variety of skill sets: data preprocessing, feature engineering, model development, databases, data pipelines, distributed computing, model deployment, Docker and container orchestration, scalability and performance monitoring, and cloud infrastructure.
What exactly is AI?
Artificial Intelligence (AI) has been around since the 1950s, and there are several approaches to it. In the knowledge-based approach, information about the world is hard-coded in formal languages by a human supervisor, and computers automatically reason about statements in these languages using logical inference rules. However, it is extremely difficult to design formal rules complex enough to describe the world accurately, so this approach has met with limited success. The ML approach to AI, on the other hand, has led to major achievements in many applications in recent years, mainly thanks to the rise of Big Data and advances in computing power.
Machine Learning is the type of Artificial Intelligence that we utilize in the Nectarine Health™ at Home care system. What is Machine Learning?
ML is a data-driven approach to AI based on the idea that a system can learn from data. It enables systems to acquire their own knowledge from the environment by extracting patterns from raw data. The performance of ML algorithms depends on the representation of the input data. Features are the pieces of information that form this representation. Simple AI problems can be solved by designing and engineering a suitable set of features and providing them as inputs to ML algorithms.
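As an illustrative sketch (not our production code), hand-engineered features might be simple statistics computed over a window of accelerometer readings; the resulting feature vector is what a classic ML algorithm would receive as input. All names and values here are hypothetical:

```python
import math

def magnitude(sample):
    """Euclidean magnitude of a 3-axis accelerometer sample (x, y, z)."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def extract_features(window):
    """Turn raw samples into a small hand-engineered feature vector:
    mean, standard deviation, and peak of the acceleration magnitude."""
    mags = [magnitude(s) for s in window]
    n = len(mags)
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / n
    return [mean, math.sqrt(var), max(mags)]

# A short, made-up window of (x, y, z) readings in g:
window = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.0, 0.2, 0.9)]
features = extract_features(window)  # fed to an ML algorithm as its input
```

The point of the sketch is the division of labor: a human decides which statistics matter, and the learning algorithm only sees the resulting features, not the raw data.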
How does Nectarine Health implement ML?
For more complex AI applications, like some of those we are developing for our products, it is difficult to know exactly which features should be designed, and designing features manually for such tasks requires a great deal of time and effort from human experts. Representation Learning is one solution to this problem: it can automatically extract a suitable set of features for the task at hand. This is where another concept, Deep Learning, comes into play. It is a specialized ML method based on Artificial Neural Networks.
Deep Learning perceives the world in terms of a hierarchy of concepts, which allows the AI system to learn complex concepts by building them from simpler ones. These concepts are stacked on top of each other, forming a deep network of many layers. Compared to classic ML methods, Deep Learning has a greater capacity to learn much more complex concepts, but it needs a large dataset to reveal its potential.
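A toy sketch of that layered idea (illustrative only; real networks learn their weights from data rather than using fixed ones): each layer applies a simple transformation, and stacking layers composes simple concepts into more complex ones:

```python
def layer(inputs, weights, bias):
    """One dense layer: a weighted sum of the inputs followed by a
    ReLU non-linearity (negative values clipped to zero)."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, bias)]

# Hypothetical fixed weights; in practice these are learned from data.
w1 = [[0.5, -0.2], [0.1, 0.9]]   # first layer: simple concepts
w2 = [[1.0, -1.0]]               # second layer: combinations of them

hidden = layer([0.3, 0.7], w1, [0.0, 0.0])   # simpler representation
output = layer(hidden, w2, [0.1])            # more complex concept
```

Each call to `layer` is one level in the hierarchy; deep networks simply chain many such levels, which is where the "deep" in Deep Learning comes from.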
How does ML add to the functionalities of our product?
We at Nectarine Health are collecting live data from our wearable devices 24/7. Our large and continuously growing dataset is of great value to us as an AI company, as it allows us to apply state-of-the-art ML methods to our AI tasks.
Taking fall detection as an example, we design, train, and tune our Deep Neural Network architectures, so that they can automatically extract suitable motor features from raw accelerometer data, and use this learned representation to discover patterns of fall anomalies in unseen data.
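The trained architectures themselves are beyond the scope of this interview, but one typical preparation step can be sketched: raw accelerometer streams are commonly segmented into fixed-length, overlapping windows, each of which becomes one input example for the network. A minimal version, with hypothetical window parameters, might look like this:

```python
def sliding_windows(stream, size, step):
    """Segment a raw sensor stream into fixed-length, overlapping windows.
    Each window becomes one input example for the model."""
    return [stream[i:i + size]
            for i in range(0, len(stream) - size + 1, step)]

# e.g. 1-second windows at 50 Hz with 50% overlap (hypothetical values)
samples = list(range(200))            # stand-in for 4 s of magnitude data
windows = sliding_windows(samples, size=50, step=25)
```

The overlap means a short event such as a fall is unlikely to be split across a window boundary, at the cost of processing each sample more than once.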
What are you the most proud of having accomplished in the work leading up to the soon-to-be-launched product?
The development of AI algorithms and models that can extract valuable well-being insights is the core activity of the AI team and a core value for Nectarine Health. There are a variety of sensors embedded in our wearable devices that provide us with a unique dataset enabling continuous improvement of the current models for the AI features as well as the future betterment of our product.
More specifically, we have developed ML models that can provide insights about the daily activity and sleep quality of our users. Using these analyses, we can further discover possible underlying long-term patterns in sleep and activity. This can in turn act as an early indication that a trend is forming in their well-being, suggesting that they may want to consider a checkup.
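As a simplified illustration of trend extraction (not our production models): a least-squares slope fitted over a rolling series of, say, nightly sleep durations can flag when a sustained trend is forming. The data below is made up:

```python
def trend_slope(values):
    """Ordinary least-squares slope of values against time (0, 1, 2, ...).
    A sustained negative slope could indicate declining sleep duration."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Made-up nightly sleep durations (hours) over two weeks:
sleep_hours = [7.5, 7.4, 7.6, 7.2, 7.1, 7.0, 6.9, 6.8, 7.0, 6.7,
               6.6, 6.5, 6.6, 6.4]
slope = trend_slope(sleep_hours)   # negative: sleep duration is declining
```

A noticeable, sustained slope over weeks is far more informative than any single bad night, which is exactly why long-term pattern analysis matters here.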
What challenges did you encounter along the way? In what sense? What did you do to tackle the challenges and come to the point where we are today?
Working with such a large live stream of sensor data has been a challenge in terms of the processing power needed for real-time computation, and in scaling the required infrastructure accordingly. However, the fact that we collect data 24/7 provides us with a big dataset, which helps us build more complex models that work well and, in turn, continuously improve them.
To tackle this challenge, we have put a great deal of effort into redesigning and implementing our production data pipeline using large-scale data processing engines running on our cloud infrastructure. This supports the scaling of our ML tasks and performs distributed computing over our streaming data. The resulting pipeline now allows otherwise slow and time-consuming tasks to run much faster and more efficiently, enabling our real-time analytics.