The AI Infrastructure Alliance brings together the tools data scientists and data engineers need to build a robust, scalable, end-to-end, enterprise artificial intelligence and machine learning (AI/ML) platform.
Right now AI/ML software sits squarely in the early adopter phase of the technology adoption curve. As companies and researchers race to solve the unique problems of data science, the result is a massive proliferation of tools that creates tremendous confusion in the marketplace. Enterprise organizations everywhere are struggling to stitch together dozens of different tools to create a complete AI/ML platform.
The Alliance and its members bring clarity to this quickly developing field by highlighting the strongest platforms and establishing clean APIs, integration points, and open standards for how different components of a complete enterprise machine learning stack can and should interoperate. That lets organizations make better decisions about the tools they’ll deploy in the AI/ML application stacks of today and tomorrow.
In the coming years, we expect a strong Canonical Stack (CS) of machine learning software to emerge from the competition, and the Alliance will stand at the forefront of bringing it to fruition through research and our members' work on solving the biggest challenges in this space. When a true CS forms in the marketplace, it creates a rock-solid foundation for future software to build on, letting developers and researchers move up the stack to solve bigger, more challenging, and more rewarding problems.
There will not be one tool that does everything in this dynamic space. The CS will combine a number of key pieces into a unified whole, with clean, standardized API boundaries between them. This Alliance aims to foster collaboration and interoperability between leading MLOps tools so that a CS can form more quickly and effectively.
Open Source and Open Core
We strongly favor open source and open core software. The reason is simple: if software is locked to a single SaaS solution or cloud provider, it never grows ubiquitous enough to become a foundation for the whole industry.
A good example is Kubernetes. It has become the standard for container management at scale, forming the commercial foundation of many software companies' solutions, from Google to VMware to Red Hat. If Kubernetes only ran on Google Cloud, it would never have formed the rich and robust ecosystem it enjoys today.
We favor tools that can run on any platform, on-premises or in the cloud.
Pure open source and open core models have driven the vast majority of true innovation in the last decade. Where the proprietary model once dominated software development, today innovation starts in open source.
Almost all the major tools driving the AI/ML revolution are open source, from PyTorch and TensorFlow to the Python programming language and its vast ecosystem of key libraries for the data science community, like pandas, NumPy, and scikit-learn. We expect that trend to survive and thrive as the CS in AI/ML comes together.
However, the AIIA is not restricted to open source and open core companies and projects, because many SaaS solutions will deliver economies of scale now and in the coming years. It may cost millions of dollars to train a cutting-edge model using IaaS and your own training stack, but a training company that can deploy dedicated hardware could easily allow companies to train that model for a fraction of the cost. We remain open to all companies and projects in this space, as the challenges data science teams face are still in flux and we need flexible thinking to solve them.
Finally, we are an open organization. Competition is encouraged, so membership in the Alliance is not limited to a single company or software platform that serves a single purpose. We are open to friendly cooperation between rivals in a clear and transparent way, as long as organizations are committed to growing the Alliance and its ecosystem together.
Core Values and Objectives
The AI Infrastructure Alliance's primary mission rests on these core objectives:
- Community – Build bridges to communities wherever the conversation is happening.
- Trusted Journalism Advisor – Be a trusted advisor to journalists everywhere covering AI/ML.
- Micro-Alliances – Build interoperability between smaller groups of partners in the Alliance.
- Reference Architectures – Define and frame the key components of the AI/ML canonical stack.
- Technical Projects – Host projects that promote interoperability and reusability between stack layers in AI/ML.
- Events – Create and host events that showcase the members and their platforms.
- Education – Build certifications and educational materials for members.
- AI Ethics – Define a practical AI ethics framework the members can adopt as best practices.
Connect with Us