There is a surprising parallel between well-documented systems development approaches and the evaluation of an AI project's execution and performance. However, there are methodological differences when engineering an AI solution, because certain aspects of the work must be given more attention to ensure successful results. As you read on, consider the “system development methodology” to include resources, activities/steps, project management, sponsor oversight and the organization’s Enterprise AI Strategy.
The initial experience documented from evaluating AI project execution performance and overall “success” identified a set of lessons that will be familiar to most systems/solutions development professionals. The emphasis placed on any one aspect of the methodology may vary, but the foundational principles for ensuring an AI solution’s development success share common attributes with past and current well-accepted system development methodologies.
Below are common system development methodology success attributes applied to AI solution development (hopefully they sound familiar):
1) Scope & Expectation-Setting. The use cases must be well scoped, avoiding any attempt to “boil the ocean”, and specific as to what the AI will do and what it can and cannot do.
2) Expected “Deliverable”. There must be a clear distinction as to whether the effort is a “demonstration” project or the basis for an AI solution that may or may not become an MVP (Minimum Viable Product) and serve as the foundation for a broader set of decision-support and/or process automation capabilities.
3) One Team. Both sides must understand the other’s experience, taking into account where the sponsoring organization sits on the AI “learning curve”.
4) Business & AI Teaming Dynamics. AI engineers, including data scientists, must understand that use case requirements are grounded in business domain expertise, which in turn defines the data and analytical requirements for designing and building the AI solution. Mutual respect and trust between the business user/domain expert and the data scientist/AI engineer are essential to this working relationship.
5) Data Assessment. Data must be explored and assessed for its ability to meet use case requirements before actual data modeling begins. Solution expectations (i.e., the use case) may need to be redefined if the required data is not yet available, complete, accurate and unbiased.
6) Flexibility. Both the business organization and the AI Solution Engineering Team must remain flexible when issues arise. Examples include requirements that prove difficult to meet; data whose availability, accuracy and/or reliability cannot be established at this time, requiring the use case to be revised; and delays in certain tasks/activities that impact the targeted milestone dates.
7) Acceptance Criteria. A defined set of acceptance criteria, including user testing, training and acceptance procedures, must be mutually agreed upon before AI solution engineering begins.
8) Strong Oversight & Program/Project Management. This is a requirement for the successful execution of any business and technology project. However, given the significance of key project elements, such as the availability of required data, and the differing experience levels of the business and AI engineering teams working together, strong project management is especially critical: it addresses changes in scope, manages and mitigates risks and unplanned events, and ensures a reasonable level of value realization when applying a new and innovative technology like AI.