A team is not a group of people that work together. A team is a group of people that trust each other.
-Simon Sinek
Whether you are building an AI product or finding ways to incorporate AI into your organization, you will likely need an AI team. The planned design of an AI team determines what it can accomplish, while the actual assembly and composition of its members affects how much of that potential is realized. Building a team with staying power in an uncertain and dynamic environment is a tough challenge, which is why we rely on observations and principles to guide team design. A big part of team selection comes down to understanding the various roles required and how those roles enable the team's success.
There is an interesting history to DS/ML/AI teams, as their composition has changed over time. Initially they arose out of need: the math required to solve certain problems had grown beyond what most programmers could handle. Consequently, organizations started building math-heavy teams, under various names, to great fanfare. While these teams were able to solve difficult problems, the majority of their work never saw the light of day. Some would say this is simply how research works, while others couldn't see a justification for the cost. In actuality, the real problem was that these teams were not tasked with solving problems that mattered to the organization. The pendulum then swung toward more hybrid groups of research scientists, applied scientists, machine learning engineers, and others who could create complex models, implement them, and write production-level code to varying degrees.
The Function
The explosion of available AI tools and models suggests things will swing further toward the software engineering side. However, organizations that swing too far will struggle with output oddities and model performance. To succeed in the emerging environment, you need to think about how to build a team that can weather unknown and unseen trials. That means understanding how to build an integrated team around newly emerging roles.
An AI team is responsible for designing and creating the sensing, interpretation, decision-making, and actuation engine behind a product. These engines are composed of multiple components, often in various states of maturity. Sensing components take in information or data feeds, which then pass to interpretation components that make the data usable. Decision-making components determine how to respond based on the data, and actuation components carry out those responses. These components fit into and interact with larger components within a product to create the end consumer experience.
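The four component types above can be sketched as a minimal pipeline. Everything here is illustrative, not from any real framework: the function names, the key=value feed format, and the temperature-threshold decision rule are all invented for the example.

```python
# Minimal sketch of the four-stage engine: sense -> interpret -> decide -> act.
# All names and data formats are hypothetical, chosen only for illustration.

def sense(feed):
    """Sensing: take in a raw data feed."""
    return [line.strip() for line in feed]

def interpret(raw_records):
    """Interpretation: make the raw data usable (here, parse key=value pairs)."""
    return [dict(pair.split("=") for pair in rec.split(",")) for rec in raw_records]

def decide(records):
    """Decision making: choose a response based on the data."""
    return ["alert" if float(r.get("temp", 0)) > 80 else "ok" for r in records]

def actuate(decisions):
    """Actuation: carry out the chosen actions."""
    return [f"action taken: {d}" for d in decisions]

# The components chain together into the engine behind a product.
feed = ["temp=85,unit=F", "temp=70,unit=F"]
actions = actuate(decide(interpret(sense(feed))))
# actions -> ["action taken: alert", "action taken: ok"]
```

In a real system each stage would be a separately maturing component (a model, a service, a data pipeline), which is why the stages of maturity described next apply per component rather than to the system as a whole.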
The state of any component progresses through three stages: proof of concept, build out, and performance tuning. The proof of concept stage indicates whether the methodology or design of a component has the potential to achieve the required effect. The build out stage takes the proof of concept and makes it functional and usable by the rest of the system. In the performance tuning stage, optimizations make the component run smoothly and more efficiently.
Given the requirements above, building an AI team involves several concerns that need to be accounted for:
Understanding what tech is available, what tech is coming, and what can be used
Determining how to combine available technology or build what doesn't exist
Implementing new research
Ensuring that adequate, quality data is available
Understanding how the AI system will be used and how it achieves its goals
Providing safety measures so the system doesn't cause adverse outcomes
Roles
Below is a list of roles now emerging for building high-performing AI teams. Note that these roles are based more on functions and traits and less on skills. Skills can be quickly acquired to serve the needs of the function. In a rapidly advancing field like AI, it's important to realize that continual skill acquisition and improvement is needed as certain technologies fall in and out of favor. As such, each of these roles is technical, and each involves degrees of software engineering, machine learning modeling, and data understanding. Each role is expected to conduct experiments, though where those experiments occur may vary. The important differentiator is how they function to help build the best system.
Modifier: Responsible for manipulating components to get the desired output. A classic example is those who focus on data transformations and hyperparameter tuning in machine learning models; a more recent example is prompt engineering. The focus is on getting the most out of each component within a system.
Connector: Responsible for figuring out how to combine different systems and components to get the required outputs. The work involves creativity in combination and demands a system-level view. Even though an AI system may look like a single interface from the outside, it usually requires many different components working together to be effective. These individuals understand how to choose and glue the right components together to get the desired outcome.
Investigator: Responsible for finding, understanding, and implementing the latest methods. These individuals are on top of research at the edge, whether through constant review or creation. They are responsible for making sure the team is at the forefront of what is possible.
Vetter: Responsible for assessing technologies, methodologies, and approaches to make sure they live up to their claims. They provide the valuable function of steering the team away from paths that won't work; think of them as pruning the way forward. They also help establish ways to validate and benchmark the components and systems being built.
Optimizer: Responsible for finding ways to get maximum system performance both from models and hardware. Their goal is to get better outcomes more efficiently, as speed and compute add up when dealing with the scale of AI systems.
Curator: Responsible for maintaining components and data, and for finding ways to create, generate, or acquire both. They are custodians of the available data, models, and other components. They are also alchemists who understand what data to feed into a system, where errors occur, and how to modify data for model improvements.
Mitigator: Responsible for reducing system-level risks. AI is best used in an automated system, and trusting such automation requires ways to prevent it from causing harm. Part of the role concerns safety, ethics, and/or alignment, while other parts concern providing guardrails around automation.
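As a toy illustration of the Modifier role's classic work, hyperparameter tuning, the sketch below sweeps a small grid of settings and keeps whichever scores best. The `score` function is a stand-in for real model training and validation, constructed so the best setting is known in advance; the parameter names and grid values are invented for the example.

```python
from itertools import product

# Toy hyperparameter sweep: try every combination in a small grid and
# keep the best-scoring one. score() is a stand-in for training and
# validating a real model.

def score(learning_rate, depth):
    # Stand-in objective with a known optimum at learning_rate=0.1, depth=3.
    return -((learning_rate - 0.1) ** 2) - ((depth - 3) ** 2)

grid = {"learning_rate": [0.01, 0.1, 1.0], "depth": [1, 3, 5]}

best = max(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=lambda params: score(**params),
)
# best -> {"learning_rate": 0.1, "depth": 3}
```

In practice a Modifier would reach for purpose-built tooling (grid or Bayesian search in a tuning library) rather than a hand-rolled loop, but the function is the same: systematically squeezing better output from a fixed component.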
A single individual may play more than one role, and abilities often span multiple roles. Each role also carries the responsibility of ensuring the system performs as expected and meets the desired goal, so each individual should keep a pulse on the overall context of everything being built. Each company will require a different combination of the above roles to achieve its desired outcomes.
This is just a starting point and you may have your own mental models. If you think any role is missing, please reach out. We'll be exploring each of these roles in depth in upcoming posts.