Artificial Intelligence, deep learning, machine learning — whatever you’re doing if you don’t understand it — learn it. Because otherwise, you’re going to be a dinosaur…
—Mark Cuban, American billionaire entrepreneur
Succeeding with AI in SAFe®
Note: This article is part of Extended SAFe Guidance, and represents official SAFe content that cannot be accessed directly from the Big Picture.
The Growing Influence of AI
Every day, more and more aspects of our personal lives are aided and supported by intelligent systems. Applying for a loan, flying on an airplane, shopping in an online store, and scheduling a doctor visit are just a few of the limitless examples of daily activities that use Artificial Intelligence (AI). Further, consumers are not the only beneficiaries of AI-powered solutions. Enterprises use AI to understand their customers better and create better products and services. Banks use AI to detect fraud and money laundering. Governments provide services to their citizens using AI. Even military and national security systems have incorporated this emerging technology.
The trend is clear:
- A startling 72% of executives believe that AI will be the most significant business advantage of the future.
- According to a PwC survey performed in 2020, 86% of respondents said AI would become a “mainstream technology” at their company in 2021.
- By 2025, 70% of organizations will have operationalized AI architectures.
A couple of critical factors have accelerated the rapid growth of these ever more intelligent technologies. Significant advancements in general-purpose hardware, cloud technology, and optimized infrastructure, such as Graphics Processing Units (GPUs), provide unprecedented computing power that supports AI capabilities at an entirely different order of magnitude. Concurrently, new learning mechanisms and AI architectures have emerged that expand the frontier of business problems solvable by AI. In addition, the COVID-19 pandemic forced many enterprises to compress their multi-year digital transformation plans into a timeframe of weeks or months, bringing a robust big data foundation to a broad range of AI-powered business opportunities. As a result, according to Gartner’s research, over 80% of Global 2000 companies have launched AI initiatives. Many of those solutions are in production and on the market today.
It is increasingly evident that every organization must seriously evaluate and ultimately leverage AI capabilities in their next generation of products and services to stay competitive.
This article provides a foundational context for applying AI in the enterprise and how SAFe can accelerate the successful adoption of this advanced technology. Key topics covered include:
- AI as a Competitive Advantage highlights the nature of AI’s business value and what is required to build a sustained advantage over the competition.
- Applying AI to Achieve Better Business Results outlines typical application opportunities for AI within both operational and development value streams and capabilities that can be directly embedded in the products and services delivered to the end-user.
- Understanding the Fundamental Types of AI describes common AI concepts and architectures that both technology and business leaders must understand to apply AI within their enterprise.
- Success Factors for AI Initiatives with SAFe describes essential practices and a decision framework for successful AI initiatives with SAFe.
AI as a Competitive Advantage
AI opens various possibilities to extend existing solutions and make them more valuable and scalable. More importantly, it opens a frontier for new solutions and qualitatively distinct capabilities that benefit the customer and the business in a new way.
But where exactly is AI most helpful?
‘Conventional’ software solutions are good at addressing well-understood problems; such solutions execute a limited number of scenarios based on a predefined list of rules. However, many organizational workflows and customer scenarios involve parameters that can’t be accounted for via conventional, preprogrammed means. AI addresses many of these most complicated scenarios and turns them into viable business opportunities. The landscape of potential AI applications within an enterprise is broad and diverse (Figure 1).
For example, a bank may utilize AI to automate customer service, identify suspicious account activity, understand customers’ needs better and offer appropriate products, extract essential insights from customer feedback and social media, facilitate regulatory compliance, and so forth.
As specific AI applications become more commonplace, companies are pressured to move beyond basic AI solutions and seek new opportunities: complex problems that can be solved with the help of AI to generate novel business value ahead of the competition.
But creating successful AI solutions is no easy task, and while it is still software, the nature of AI software development is fundamentally different. Many organizations don’t know how to effectively leverage an AI opportunity even when they devote significant resources to the problem. SAFe provides an operating model that helps identify opportunities, validate new capabilities, and translate them into valuable, working solutions that benefit the customer. The first step is to understand the distinct ways AI helps improve an organization’s outcomes.
Applying AI to Achieve Better Business Results
Generally, an organization’s opportunities for leveraging AI lie in three areas (Figure 2):
1. Increasingly Intelligent Customer Solutions. More and more intelligence is embedded in the products and services we access every day. Common examples include self-driving vehicles and driver-assisted capabilities, facial recognition on our phones, personalized product recommendations, and smarter devices powering the Internet of Things (IoT). In addition, many AI-powered functions are now directly embedded in the software applications that we use on our mobile and desktop computers.
2. Improved Operational and Development Value Streams. Organizational processes are ripe for AI applications. In this scenario, AI is used to make an internal enterprise process more productive or to introduce qualitatively new capabilities within the process. Intelligent capabilities can be applied to both Operational and Development Value Streams. Inventory and demand management, personalized consumer experiences, and fraud detection are just a few examples of how AI can power operational workflows. In the case of development value streams, AI can be used to navigate the solution context, analyze production data, identify optimal parameters for user scenarios, and facilitate effective testing.
3. Customer Insights. AI can help organizations identify new business opportunities, learn more about the customer, and extract market insights that create entirely new offerings on an even broader scale. In this latter case, AI supports the launch of otherwise undiscoverable initiatives via the Business Agility Value Stream (BAVS).
Understanding the Fundamental Types of AI
Artificial intelligence (AI) refers to a wide range of smart machines capable of performing tasks that typically require human intelligence. The potential applications represented by AI are extensive and affect almost every facet of business and consumer life. Many of today’s AI systems are based on the concept of Machine Learning (ML). ML-based solutions are designed to improve based on experiences and data. However, some AI architectures do not involve Machine Learning and are instead based on a comprehensive set of static rules that encode complex reasoning. Figure 3 provides a typology of various AI and machine learning approaches and illustrates some of the capabilities these technologies enable. Note, however, that some of the illustrated capabilities can be built with more than one AI approach, and a combination of approaches may also be applied.
As shown in this graphic, there are three major categories of ML, differentiated by how the learning is achieved. All three approaches involve three primary components: the data, the learning algorithm, and the learning model, as Figure 4 illustrates.
Most generally, Machine Learning can be described in terms of the three learning categories below, plus deep learning, an approach that can be applied within each of them.
Supervised learning utilizes training data to teach the model how to produce the desired output (Figure 5). The training data must contain both the inputs and the desired outputs, supplied as labels. The learning algorithm runs the inputs through the model, compares the resulting outputs with the labels, and computes the error.
The algorithm then adjusts the model parameters and repeats the process until the error is sufficiently low. The approach is called supervised because the desired outputs are supplied alongside the inputs and are used to ‘supervise,’ or guide, the learning process. Unless the data initially includes both the inputs and the labels, a ‘labeling’ process is required before the model can be trained.
Supervised learning can help detect known patterns (fraudulent transactions, spam messages) and categorize data (image recognition, text sentiment analysis). In some instances, the output data may be readily available or easily attainable in an automated manner, such as a customer name alongside the profile photo for face recognition, or a five-star rating score next to the product review text for sentiment detection; a situation often referred to as self-supervised learning. Identifying such facets of data opens excellent opportunities for applying supervised learning to organizational processes.
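The supervise-and-adjust loop described above can be sketched in a few lines. This is a deliberately minimal illustration with hypothetical data and a one-parameter "model" (a fraud threshold); real systems use far richer features and learning algorithms.

```python
# Minimal sketch of supervised learning (hypothetical data and model):
# a one-parameter 'model' (a threshold) is adjusted until the error
# on labeled training data is sufficiently low.

# Labeled training data: (input, desired output) pairs.
# Label 1 = fraudulent transaction, 0 = legitimate (illustrative only).
training_data = [(120, 0), (95, 0), (4300, 1), (60, 0), (5100, 1), (3900, 1)]

def predict(threshold, amount):
    """The 'model': flag a transaction as fraud above the threshold."""
    return 1 if amount > threshold else 0

def error(threshold):
    """Compare model outputs with the labels and count the mistakes."""
    return sum(predict(threshold, x) != y for x, y in training_data)

# The 'learning algorithm': search for the parameter with the lowest error.
best = min(range(0, 6000, 50), key=error)
assert error(best) == 0  # the labels 'supervised' the parameter choice
```

The labels are what make this supervised: without the desired outputs, there would be nothing to compare the model's predictions against.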
Unlike the previous approach, Unsupervised Learning does not utilize any feedback mechanism. Instead, it extracts valuable information merely by analyzing the internal structure of the data.
Unsupervised learning has a significant advantage because the input data doesn’t need to be labeled, allowing the learning algorithms to use vast volumes of data. This supports easier scaling of the unsupervised learning-enabled capabilities.
This type of AI algorithm is mainly applied to data clustering, anomaly detection, association mining, and latent variable extraction tasks. These processes partition the data by similarity and establish relationships within the data that other solution capabilities or functions can use. Common use cases include customer or product segmentation, similarity detection, and recommendation systems. Unsupervised learning can also serve as one link in a broader supervised learning process, extending data labeling to unlabeled datasets.
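The clustering idea can be sketched with a toy k-means-style loop. The data and initialization here are purely illustrative; real segmentation works over many features and typically uses an ML library.

```python
# Minimal k-means-style clustering sketch (hypothetical data): structure
# (two customer segments) is found without any labels or feedback.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]   # e.g., monthly spend (illustrative)
centroids = [points[0], points[3]]          # naive initialization

for _ in range(10):                         # a few refinement iterations
    clusters = [[], []]
    for p in points:                        # assign each point to its nearest centroid
        clusters[abs(p - centroids[0]) > abs(p - centroids[1])].append(p)
    centroids = [sum(c) / len(c) for c in clusters]  # recompute the centers

# The two segments emerge purely from the data's internal structure.
```

Because no labels are needed, the same loop scales naturally to much larger unlabeled datasets, which is exactly the advantage noted above.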
Reinforcement Learning is similar to supervised learning because it also involves a feedback mechanism that verifies the model. In this case, however, the feedback does not rely on labeled data. Instead, the system acts in a particular environment and is supplied with a reward function that helps the model learn which actions lead to successful outcomes. The learning algorithm therefore generates exploratory actions and selects the scenarios that lead to the highest reward.
Reinforcement learning finds applications in robotics, gaming, decision support systems, personalized recommendations, bidding and advertising, and other contexts where simulated exploratory behaviors can be evaluated in terms of their value.
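The reward-driven loop can be illustrated with a minimal two-action example. The reward values, learning rate, and exploration rate below are all hypothetical; real reinforcement learning involves states, sequential decisions, and far more sophisticated algorithms.

```python
import random

# Minimal reinforcement-learning sketch (hypothetical rewards): no labels
# are given; the agent learns action values purely from a reward signal.
random.seed(0)

rewards = {"A": 0.2, "B": 0.8}        # hidden average reward per action
q = {"A": 0.0, "B": 0.0}              # the model: estimated action values
alpha, epsilon = 0.1, 0.2             # learning rate, exploration rate

for _ in range(500):
    # Exploratory behavior: sometimes try a random action...
    if random.random() < epsilon:
        action = random.choice(["A", "B"])
    else:
        # ...otherwise exploit the action currently believed to be best.
        action = max(q, key=q.get)
    # The environment returns a noisy reward; nudge the estimate toward it.
    reward = rewards[action] + random.gauss(0, 0.05)
    q[action] += alpha * (reward - q[action])

# The agent converges on the action with the higher expected reward.
```

Note the contrast with supervised learning: nothing ever tells the agent the "correct" action; it discovers which action is better only by acting and observing the reward.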
Deep learning is the label for machine learning models based on artificial neural networks (ANN). Deep learning can be effectively applied to supervised, unsupervised, and reinforcement learning and, in many practical tasks, has produced results comparable to or surpassing human expert performance. An artificial neural network is loosely modeled after the structure of neurons in the brain. An ANN has inputs and outputs and consists of a connected set of neurons. An example of such a model could be a neural network that accepts pixel colors in an image as an input and determines what type of object is in that image as an output.
Every connection has a specific weight that either strengthens or inhibits the signal. When all the connectors leading to a particular neuron convey a sufficiently strong cumulative signal, the neuron activates and transmits the signal to other neurons further downstream.
A neural network with multiple hidden layers is called a deep neural network and is the foundational architecture for deep learning.
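The neuron-and-weights mechanics described above can be shown with a tiny hand-built network. The weights here are hand-picked for illustration (in practice they are learned); the classic XOR function is used because it cannot be computed without a hidden layer.

```python
# Minimal sketch of a neural network forward pass (hand-picked weights,
# purely illustrative). Each neuron sums its weighted inputs and
# 'activates' when the cumulative signal is strong enough.

def neuron(inputs, weights, bias):
    """Fires (1) when the weighted signal exceeds the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_network(x1, x2):
    # Hidden layer: two neurons detecting 'x1 OR x2' and 'NOT (x1 AND x2)'.
    h1 = neuron([x1, x2], [1, 1], -0.5)    # OR
    h2 = neuron([x1, x2], [-1, -1], 1.5)   # NAND
    # Output layer: combines the hidden signals (AND of h1 and h2).
    return neuron([h1, h2], [1, 1], -1.5)

# XOR is not linearly separable; the hidden layer makes it computable.
```

Deep networks stack many such hidden layers, and training consists of adjusting all the connection weights instead of setting them by hand.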
Success Factors for AI Initiatives with SAFe
Developing and delivering successful AI solutions is a challenging task for many organizations. The following factors are critical to establishing a productive solution development process for AI systems.
Apply a Clear AI Decision-Making Framework
A significant number of AI initiatives fail to deliver the better results promised when investments in this technology are pitched. This is often caused by poor decision-making around how and why AI will be used. Organizations often want to ‘do AI’ because ‘everyone else does,’ without understanding the effort required to operationalize and scale this technology, its impact on the organization, or even if it will produce the intended benefits.
SAFe organizations already have powerful tools to make better decisions on the appropriate use of AI. A few of these are highlighted in Figure 9.
1. Alignment with strategy ensures that AI initiatives pursue beneficial outcomes for the business. Aligning the AI roadmap with Portfolio strategy is an essential step in this direction. Some AI initiatives may require more financial support; others may need to be repurposed or canceled based on a progressively assessed economic viability. Productively managing the spend requires Lean Budgeting. Using the Portfolio Kanban System and a Lean Business Case helps establish better alignment with strategy. Additionally, PI Planning provides the foundation for recurrent alignment of AI strategy with the actual implementation.
2. Customer centricity is critical to ensuring that an AI initiative is actually solving a customer problem. For example, intrinsic AI capabilities (such as image recognition or natural language processing) must be appropriately integrated into a favorable customer scenario. Explicitly defining the customer problem is an important step and hugely benefits from applying Design Thinking to AI capabilities.
3. Continuous exploration paves the pathway to a successful AI-powered solution. Solution development always contains a significant degree of uncertainty. In the case of AI, however, the level of uncertainty is exceptionally high in creating the right solution and implementing it correctly. This is where the SAFe Lean Startup Cycle is very useful. Creating a clear business hypothesis, building an AI MVP, and validating it against suitable measures are at the core of successful exploration.
4. Empirical milestones guide the development of a successful AI solution. AI capabilities need to be continuously integrated with the rest of the solution throughout the incremental development process. Solution increments are used to elicit important customer feedback.
Organize to Deliver AI Solutions
Organizing around value is a critical enabler of flow and one of the SAFe operating principles. In the case of AI-powered solutions, it has specific implications for SAFe organizations. We’ve observed three stages of maturity and organizational evolution, as Figure 10 illustrates.
Stage 1. The first step is to build a critical mass of AI expertise. This usually includes hiring and developing skills in AI and ML, along with the accompanying skills in data science, data analytics, and data engineering. This centralized team engages with subject matter experts and may take on building and deploying an initial proof of concept, pilot projects, or even an MVP for an anticipated AI initiative. This seeds the enterprise with AI thinking and starts to show the potential for extending the range and types of AI applications.
Stage 2. It quickly becomes apparent that these new kinds of capabilities need to be developed and integrated into the enterprise’s product offerings. This requires the engagement of the ARTs that are responsible for enterprise solutions. As a result, the focus of the central AI team moves from developing AI capabilities themselves to enabling ARTs to do so. At this stage, the team may embed subject matter experts in each train to streamline the process. Typically, the centralized function also continues to develop certain types of AI capabilities, particularly those that cross value streams and drive the creation of more mature, enterprise-level data.
Stage 3. With time, more AI functions get embedded as part of the Agile Release Trains. This helps manage integration risks, establishes a more robust delivery pipeline, and fosters critical knowledge. The decentralized effort, however, typically benefits from some ongoing centralized expertise, including advancing the science generally, implementing portfolio and enterprise cross-cutting initiatives, and providing organizational governance that ensures proper management of customer data and the ethical and business implications of AI in the marketplace.
Across all three stages, the AI group must involve both engineering and business professionals. This is vital for the organization to be able to identify and leverage productive opportunities for AI-enabled solutions.
Proactively Manage Data
Data is a critical link in most AI implementations and, when handled poorly, leads to a failed AI initiative. Every organization must establish an effective process around data management.
Data is critical to AI in a couple of ways. First, most AI systems require a training dataset that must be large enough—and is thus referred to as Big Data—for the AI model to learn to perform its task correctly. For example, a fraud detection system may require tens or even hundreds of thousands of transactions before it learns to reliably recognize the fraudulent ones among them. And to validate whether the system was successfully trained, a whole separate dataset—the testing dataset—must be readily available.
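Separating the training and testing datasets can be sketched as follows. The records and split ratio are illustrative assumptions; the essential point is that the model is validated on data it has never seen.

```python
import random

# Minimal sketch of separating training and testing datasets
# (synthetic record IDs, purely illustrative).
random.seed(1)

records = list(range(1000))      # stand-ins for transaction records
random.shuffle(records)          # avoid ordering bias in either set

split = int(len(records) * 0.8)  # e.g., an 80/20 split
training_set = records[:split]   # consumed by the learning algorithm
testing_set = records[split:]    # held out to validate the trained model

# No record appears in both sets, so the test measures generalization,
# not memorization.
assert not set(training_set) & set(testing_set)
```

If the same records were used for both training and testing, the validation would only confirm that the model memorized its inputs, not that it learned the task.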
But the challenge does not come only from the size of big data. An even more significant challenge is that data seldom exists in a ready-to-go, consumable form. It is often dispersed across multiple, disjointed data sources within and outside the enterprise. Additionally, the actual format of the data may be unconsumable. On top of that, the data may lack critical attributes without which training is impossible.
Any AI approach depends on a portfolio strategy that recognizes that a systematic approach to acquiring and normalizing big data virtually always precedes an AI application. This includes validating assumptions about data as a first order of business in every AI initiative. This is so material to AI that a separate article (coming soon) is devoted to the subject. Figure 11 illustrates some of the main elements of how SAFe guides enterprises on this parallel journey.
Build Organizational Competency Around AI
Artificial Intelligence represents a novel type of technology to many organizations. Even those companies that have AI initiatives in flight often lack the right expertise applied in the right place. AI is a complex technology that requires a good understanding of both AI capabilities and the landscape of potential business applications. Navigating critical opportunities requires business and technology people in an organization to work collaboratively to develop AI competency.
SAFe emphasizes the importance of the constant interaction of business and technology professionals. This is achieved through multiple means, including strategic effort around the portfolio kanban system, events like PI planning, System and Solution Demos, and Inspect & Adapt. Additionally, a range of continuous exploration activities, design thinking, and work definitions leverage the overlap between business and technology representatives. In the case of AI, all these techniques need to be utilized even more rigorously as disconnects may be very costly.
Establish an AI Solution Path
Driven by the flow of Business Agility, an AI solution advances through the following logical steps:
Define – This step involves identifying the business opportunity and understanding the customer’s need. The initial assessment determines how suitable AI architectures could support the business need. Both technology and business stakeholders are closely involved in this step.
Pilot – The step usually involves building a Minimum Viable Product (MVP) for AI capabilities and proceeding further based on the learnings from the MVP. It is critical to ensure that the MVP solves the business problem rather than simply demonstrating technical feasibility. The basic level of AI capability monitoring and measurement is applied at this step. Pilots should be designed to get to an MVP that can test the hypothesis of the AI capability as quickly as possible.
Operationalize – At this stage, the AI capability—if proven viable in the previous step—gets designed and implemented in a way that allows for full integration with the existing solution ecosystem and provides full-fledged support for required business scenarios. Operationalization is expensive in solution development and may lead to significant waste if the MVP does not validate the viability of integration with existing systems. The organization must also plan for the investments required to monitor, adjust, and retrain AI components continuously.
Scale – An AI capability is successful if it grows to support increasing volumes of customers. In the case of AI—quite unlike conventional solutions—scaling introduces some unique challenges. With increasing processing volume, the actual business parameters, the data, and the AI model may grow out of alignment (known as “drift” in AI terminology). Progressive adjustment of the model, the learning algorithm, and the data processing is often required to sustain the model through scaling and beyond. Additionally, as a result of scaling, some qualitatively new scenarios may emerge or be subsumed by the solution. This may sometimes lead to a significant update to the AI capability or even a drastic change to the ML model or the learning algorithm.
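A very simple form of drift monitoring compares a statistic of the live inputs against the baseline established at training time. The numbers and the 25% tolerance below are hypothetical; production systems track many statistics and use proper statistical distance measures.

```python
# Hypothetical sketch of detecting data drift: compare a simple statistic
# of live inputs against the training data's baseline.
baseline_mean = 100.0                        # e.g., mean transaction amount
                                             # observed at training time
live_inputs = [180.0, 210.0, 195.0, 205.0]   # illustrative production data

live_mean = sum(live_inputs) / len(live_inputs)
drift_ratio = abs(live_mean - baseline_mean) / baseline_mean

# If the inputs have shifted too far from the training distribution,
# flag the model for adjustment or retraining (25% is an arbitrary tolerance).
needs_retraining = drift_ratio > 0.25
```

Checks like this are what make the continuous monitoring and retraining investments mentioned in the Operationalize step concrete.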
Govern – Enterprises are often presented with multiple opportunities to apply AI across different areas. Over time, an ecosystem of intelligent capabilities emerges across various solutions that operate on the same pool of enterprise big data, establishing potential interoperability that creates higher-level enterprise value. This requires a governance process that spans multiple solution trains in the portfolio, or even multiple portfolios in the enterprise. This governance process can be part of the general portfolio and enterprise governance. Still, it must especially account for areas that grow far more important with the advent of AI, including effective data management, privacy and security, computing power, the bias problem, and traceability. Ethical implementation of AI is a dominant topic in this domain and requires serious investment by organizations leveraging this technology.
Due to the inherent complexity of AI solutions and the lack of well-established practices supporting AI development and operation, many organizations experience serious challenges driving their AI solutions through operationalization, scaling, and proper governance, which significantly reduces the number of initiatives that produce any business value. That is why an organization must establish a productive solution path that fosters rapid feedback and effectively uncovers and manages technology and business risks.
Succeeding with AI is critical to surviving and thriving in the Digital Age. SAFe provides various tools that enable the successful development and delivery of AI-powered solutions. By organizing teams and trains to incorporate AI capabilities, making practical, strategically sound decisions, and proactively building up the necessary cloud and big data capabilities, enterprises create the foundation for a productive AI effort. It is essential that technology and business professionals collaboratively develop competency around AI and understand where Artificial Intelligence can help the organization seize critical business opportunities.
Learn More
- PwC. The Macroeconomic Impact of Artificial Intelligence, 2018. https://www.pwc.co.uk/economic-services/assets/macroeconomic-impact-of-ai-technical-report-feb-18.pdf
- PwC. AI Predictions, 2021. https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html
- Gartner. The 4 Trends That Prevail on the Gartner Hype Cycle for AI, 2021. https://www.gartner.com/en/articles/the-4-trends-that-prevail-on-the-gartner-hype-cycle-for-ai-2021
- Google. Using GPUs for Training Models in the Cloud. https://cloud.google.com/ai-platform/training/docs/using-gpus
- Iansiti, Marco, and Karim R. Lakhani. Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World. Harvard Business School Publishing Corporation, 2020.
Last update: 26 January 2022