Tell us a bit about yourself, your background, where you work, and what you do there.
My journey is pretty varied, spanning several types of organizations, industry domains, and technologies. I’ve worked across R&D innovation and incubation teams, enablement groups, the Office of the CTO (OCTO), product design divisions, venture capital accelerators, professional services, and more.
From an industry perspective, I spent my early career designing wireless technologies and embedded systems for consumer experiences, people tracking, and NASCAR truck racing, working with a variety of semiconductor, OEM, and third-party organizations. I’ve also worked for deep learning tooling, framework, and infrastructure organizations across structured, vision, text, and time-series modalities, along with deep reinforcement learning (DRL). Further, I spent time at financial organizations, from consumer and small-business banking to high-wealth investment trading, operating in a regulated environment at scale.
At Beyond Limits, I’m part of the AI Solutions team that sits under the Office of CTO (OCTO) umbrella, leading our clientele’s research-driven initiatives. In addition to the technical aspects of AI research, we also focus on other dimensions, like strategy, processes, and communicating narratives. I particularly focus on setting the client up for success and guiding them along the right path in optimizing the outcomes they want to achieve across both short-term and long-term horizons.
What career advice would you give to your past self?
I probably would have gone down a different track than I did originally, and that may have led me towards a different path. Looking back, while I have had a number of gigs already, I think I would have taken even more risks and perhaps ventured to spin out a company or two of my own, as I’m heavily driven by new experiences in both my personal and professional life.
How does ML impact your business? Why is it important, and what does it help you achieve?
Beyond Limits has been an AI-first company and AI technology provider since we spun out of NASA’s JPL Reasoning Lab in 2014; you can find some of our early work on the Mars Rover. Because of that, you’ll see a lot more focus on the reasoning side and cognitive AI, and we marry that with traditional data-hungry methods.
Many of the enterprise organizations we work with are at different levels of sophistication and maturity along their journey. They have usually attempted a few other partner solutions or built them internally but did not get the outcomes they desired. Sometimes we’ll get clients that are after a little more than a data-hungry-type method. That’s maybe where we’ll start; then we’ll see where their pain points are and where we need to invest more in other areas as value drivers, and we’ll marry that with more cognitive AI down the road.
For customers who aren’t ready to jump on that cognitive journey, we’ll approach it from a more traditional angle at first. A lot of the data-hungry methods finish off with prediction, basic evaluation, and some level of explainability; then you say, “Well, how can I improve upon this?” And that’s where a lot of the more knowledge-driven methods come in.
What are your ML use cases?
We develop industrial-grade enterprise AI software across a number of industries, modalities, and use cases. We have had the most success in industrial sectors, particularly oil and gas, as reflected in our flagship product lines (Lubricant Formulation Advisor and Luminai Refinery Advisor). However, we operate across a number of other industries, including power and utilities, manufacturing and industrial IoT, finance, healthcare, and government, among others.
Much of our intellectual property is based on our cognitive, knowledge-driven AI reasoning block, a reasoning component that instruments prior domain expertise and can be offered as a companion technology to data-driven techniques. It picks up where data-driven methods are insufficient. We call this Hybrid AI, and between the two methodologies we can deliver better decision-making outcomes.
This is particularly helpful for leveraging intelligence and insights in cases of data scarcity, when new scenarios need to be modeled, when statistical patterns cannot be reliably deciphered, or in situations with a high level of uncertainty, risk, and safety, as in more regulated environments.
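To make the pairing concrete, here is a minimal sketch of what a Hybrid AI decision step could look like. This is not Beyond Limits’ actual implementation; all names, rules, and thresholds here are hypothetical.

```python
# Hypothetical sketch of a "Hybrid AI" decision step: a data-driven model
# produces a prediction, and a knowledge-driven rule layer adjusts or
# overrides it when the statistical evidence is weak. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Decision:
    value: str
    confidence: float
    rationale: str

def data_driven_predict(features: dict) -> Decision:
    # Stand-in for any trained model (e.g., a classifier's predict_proba).
    score = min(1.0, 0.02 * features.get("historical_samples", 0))
    return Decision("increase_throughput", confidence=score,
                    rationale="statistical pattern in historical data")

def knowledge_driven_adjust(features: dict, ml: Decision) -> Decision:
    # Encoded prior expertise: hard safety limits take precedence, and
    # low-evidence predictions defer to a default domain policy.
    if features.get("reactor_temp_c", 0) > 450:
        return Decision("reduce_throughput", 0.99,
                        "safety rule: temperature above operating envelope")
    if ml.confidence < 0.5:
        return Decision("hold_steady", 0.7,
                        "insufficient data evidence; default domain policy")
    return ml

features = {"historical_samples": 10, "reactor_temp_c": 460}
print(knowledge_driven_adjust(features, data_driven_predict(features)))
```

The design point is simply that encoded domain rules can veto or backstop a statistical prediction when its evidence is weak, which is the essence of pairing knowledge-driven and data-driven methods.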
However, we do have use cases that focus on more traditional data-hungry methods as well. These include computer vision tracking at the edge, financial forecast trading, and intelligent energy scheduling and power utilization.
What questions should organizations adopting ML be asking themselves to replicate your success?
“What does a good data culture look like?” While we have our foundations in research, we stitch together our different divisions of research, product, clientele services, and design to deliver great experiences for our enterprise customers.
“How do we marry the strategic and tactical?” As I mentioned earlier, we operate both strategically and tactically, so while we often lead with strategy and utility, we link and map that toward a technical implementation perspective to ensure purpose-led product and customer outcomes.
“Where should you focus your investments and resources?” We receive all types of requests, both internally and externally, so we need to be smart and strategic about where we invest our time and resources while keeping scale, repeatability, and efficiency in mind. We are still a somewhat small company, with 200+ folks, plus a few complementary acquisitions to expand our footprint.
What teams do you have in place, and what part of the ML process does each team do?
Within the OCTO, we have 30+ members, made up of clientele solutions, advanced learning systems, symbolic cognitive reasoning, ML systems, and a few core functions of the OCTO. We have a separate product group that, for the most part, operates independently from our initiatives but will use IP building blocks from the OCTO.
How do you work in synchronization? What best practices have you developed?
This can be a challenge, as Beyond Limits has offices in Glendale (Los Angeles area), New York, India, Taiwan, Singapore, and Japan. We often share and matrix resources across a variety of client engagements. We have also recently acquired a few companies for supporting functions, such as Oak Consulting and Altec in the last few months.
Establishing a set of shared standardized values, common guidelines, and ways of working that we can all adhere to facilitates our methodology and practices.
For example:
- While we try to lead by being technology agnostic, we do have preferences for certain technologies, which we align on internally.
- We continually get data drops from clients at all different levels of maturity, so we need to consistently catalog and annotate that information until we can set up a proper pipeline for automation and quality control (a small example of such a gate follows this list).
- Experimentation, versioning, lineage, reproducibility, and testing are important for internal collaboration and how we deliver experiments, insights, and serve artifacts to our clients.
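As a hedged illustration of the data-drop point above, a lightweight validation gate along these lines could sit in front of the catalog; the columns and threshold are purely illustrative, not our actual schema:

```python
# Hypothetical gate for an incoming client "data drop": validate schema and
# completeness before cataloging. Thresholds and columns are illustrative.

import pandas as pd

EXPECTED_COLUMNS = {"timestamp", "sensor_id", "value"}
MIN_COMPLETENESS = 0.95  # fraction of non-null cells required

def validate_data_drop(df: pd.DataFrame) -> list[str]:
    issues = []
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    completeness = 1.0 - df.isna().mean().mean()
    if completeness < MIN_COMPLETENESS:
        issues.append(f"completeness {completeness:.2%} below threshold")
    return issues

df = pd.DataFrame({"timestamp": [1, 2], "sensor_id": ["a", None], "value": [3.1, 4.2]})
for issue in validate_data_drop(df):
    print("REJECTED:", issue)
```

In practice, a manual check like this is the stopgap that later gets folded into an automated pipeline.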
It’s easy to get into some bad habits early in our cycles. Demonstrating value early, continuously progressing, and building that foundation help validate our direction and progress.
What does your MLOps stack look like?
We don’t have a standardized operating stack; we’re pretty flexible. Part of that is because I’m on the clientele side of it, but even in-house, our big thing is to be flexible. We either have integrations with third parties and vendors, or we might have our own in-house logic behind that. Or there might be times when we’re leveraging open-source frameworks rather than an enterprise edition, and we basically need to build our own custom logic that sits on top.
Some enterprise companies say, “This is our standard stack we use across the board.” We don’t quite sit in that area. We do have our own preferences, but we tend to be more open.
Once your model meets the real world, what is important for you to monitor, and why?
As the “ML Product” shifts from the offline to the online environment and interacts with live data, it can often exhibit behaviors different from those it was trained for and intended to have.

Further, these behaviors are not static: they are constantly evolving over time, and the prospective audience and content may drift further as fresh information arrives. This requires proper validation, monitoring, and safeguarding across a variety of dimensions to ensure the model remains robust to its inputs and conditions. As we execute different experiments on live feeds, our focus extends from offline metrics to monitoring business and domain metrics and KPIs.
Exposing the model to synthetic and simulated environments, injecting mock dependencies or environments, or developing behavioral tests can facilitate monitoring model behavior under conditions of uncertainty.
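As one example of a behavioral test, an invariance check asserts that predictions stay stable when an operationally irrelevant input changes. The sketch below uses a stand-in predict function, since the real model and fields will vary by engagement:

```python
# Sketch of a behavioral (invariance) test: the model's output should not
# change materially when an operationally irrelevant field is perturbed.
# `predict` is a placeholder for the real model's inference function.

import random

def predict(record: dict) -> float:
    # Stand-in model: depends only on the relevant signal.
    return 0.8 * record["flow_rate"]

def test_invariant_to_sensor_id():
    base = {"flow_rate": 12.5, "sensor_id": "unit-07"}
    perturbed = {**base, "sensor_id": f"unit-{random.randint(0, 99):02d}"}
    assert abs(predict(base) - predict(perturbed)) < 1e-6

test_invariant_to_sensor_id()
print("invariance test passed")
```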
What are the top metrics that you look to gain visibility into model health?
Similarly, model health metrics depend on the use case, in both an offline and an online context. Some of them are very traditional, like drifts and skews. But we also ask, “How complete is my data? How strong is the evidence, and does my data support the prediction?”
One of the things we do on the knowledge side is look at how to weight the data you have depending on how complete it is.
In other areas there might be a long lagging period, so we look at proxies. These are a few of the themes I prefer to look at (a brief sketch of the first two follows the list):
- Distributional Drifts and Skews: Data, feature, concept, model performance envelopes, training/serving skews, or behavioral unit and system tests
- Dimensions of Data: Completeness, weak vs. strong evidence, or traditional labeling and annotation that may require domain knowledge
- Proxies: Measures for tracking progress toward specific goals and outcomes, used when direct measures are slow to change (lagging) or difficult to capture in live environments
- Domain & SME Business Metrics: Equivalents such as an AI safety heuristics index to measure the volatility of an asset, or operating boundaries optimized so hardware components can be controlled without damage
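To ground the drift and completeness themes, here is a rough sketch of a two-sample drift check and a completeness-based evidence weight; the choice of test, the threshold, and the fields are assumptions for illustration:

```python
# Rough sketch: (1) a two-sample KS test to flag distributional drift between
# training and serving data, and (2) a simple completeness-based weight on
# incoming records. Thresholds are illustrative, not production values.

import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    stat, p_value = ks_2samp(train, live)
    return p_value < alpha  # small p-value: distributions likely differ

def completeness_weight(record: dict, required: list[str]) -> float:
    present = sum(record.get(k) is not None for k in required)
    return present / len(required)  # weaker evidence -> lower weight

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5_000)
live = rng.normal(0.4, 1.0, 5_000)  # shifted: should trigger the alert
print("drift detected:", drift_alert(train, live))
print("weight:", completeness_weight({"a": 1, "b": None, "c": 3}, ["a", "b", "c"]))
```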
What does your resolution process look like? Are you a free agent, or do you need to get other teams involved (depending on the issue) to solve issues?
The OCTO is composed of multiple groups, one of which is the AI Solutions Team. We’re focused on custom client engagements, some of which revolve around our existing intellectual property that fuels our product line, while others are entirely new custom developments.
The sophistication of our projects varies as they transition from pilot to platform/product and GTM. Some of our clients take this responsibility on internally, while we are more deeply involved with others as the opportunity evolves. For the latter, we spin up expertise as needed, as we have groups with specific specializations.
What keeps you up at night? What monitoring issues are challenging to detect and resolve?
I think this is different for many organizations, business units, use cases, and the corresponding impacts. Take, for example, regulated industries such as healthcare, finance, and energy, or mission-critical situations at NASA. These situations carry much greater risk than, say, recommendations in ad-tech, so the cost of error can be catastrophic and the utility of safety is critical. These themes are a bit more difficult to detect, as they tend to require more scenario design, modeling different “what-if” cases.
If you had all the resources and could put together your dream team, what machine learning model would you want to create and publish?
Dream Team, eh? I’m reminded of the NBA Olympic Dream Team or something out of the original Space Jam. I don’t think there’s a single good answer for this, whether organizationally, architecturally, or as a golden model; it’s very dependent on context.
However, from a personal perspective, I will say I have an interest in AI safety and curiosity-driven exploration. Those tend to link toward opportunities in sequential, multi-step decision systems like DRL. With that said, I see opportunities to improve those areas by scaling with domain knowledge instrumentation. These dimensions have traditionally been viewed as a bit niche, so I’m looking forward to them becoming more widespread.
In what ML areas are you investing the most heavily to improve over the next two years?
While we do have our specializations, we don’t limit ourselves to a particular modality or industry. From a technology perspective, we will continue to invest in our reasoning offerings around cognitive AI. We’ll also continue to invest in complementary data-hungry methods to mature our Hybrid AI, along with the supporting ML systems functions and platforms to scale across different data taxonomies and use cases, both native and custom.