Machine learning can drive tangible business value for a wide range of industries — but only if it is actually put to use. Despite the many machine learning discoveries being made by academics, new research papers showing what is possible, and an increasing amount of data available, companies are struggling to deploy machine learning to solve real business problems. In short, the gap for most companies isn’t that machine learning doesn’t work, but that they struggle to actually use it.
How can companies close this execution gap? In a recent project we illustrated the principles of how to do it. We used machine learning to augment the power of seasoned professionals — in this case, project managers — by allowing them to make data-driven business decisions well in advance. And in doing so, we demonstrated that getting value from machine learning is less about cutting-edge models, and more about making deployment easier.
AI as project manager
Technology service providers like Accenture work on many software projects at once. A common challenge is that problematic issues are discovered only after the fact, requiring post-mortem investigations to determine the root cause. This is tedious work, and it can become overwhelming when hundreds of projects are running simultaneously. A proactive solution would save time and reduce the risk of the problems occurring in the first place. Our team decided to address this problem by finding patterns in a large, complex volume of data, building machine learning models on those patterns, and using them to anticipate critical problems before they occur. We called our effort the “AI project manager.”
An AI project manager acts as an augmentation tool for human project managers. Using historical data from software projects, we trained a machine learning model to predict, weeks in advance, whether a problem is likely to occur. As a test case, we used the model to predict the performance of software projects against a host of delivery metrics.
Training the model
To train this model, we first collated three years of historical data from thousands of projects, comprising millions of records. The model identified red flags that might indicate an upcoming problem in project performance, such as an increase in the average time spent resolving a bug, or a rise in backlog processing and resolution times. Most importantly, it was able to predict potential risks ahead of time: in our case, four weeks ahead. This lead time allows service provider teams to determine the nature of the upcoming problem, identify the areas that would be affected, and take remedial action to prevent it from occurring at all. In effect, the AI project manager functioned as an early warning system that enabled human project managers to focus on more valuable business tasks.
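The core of this setup — features drawn from recent weeks, a label taken four weeks ahead — can be sketched in plain Python. The metric name, threshold, and window sizes below are illustrative assumptions, not details of the actual system:

```python
from statistics import mean

# Hypothetical weekly metric for one project: average bug-resolution time
# in hours. The values and the red-flag threshold are made up for the sketch.
weekly_resolution_hours = [10, 11, 10, 12, 14, 18, 25, 31, 30, 29, 12, 11]

LEAD_TIME = 4   # predict red flags four weeks in advance
HISTORY = 4     # feature window: the four weeks before the cutoff
RED_FLAG = 24   # illustrative threshold for a "problem" week

def make_examples(series, history=HISTORY, lead=LEAD_TIME, threshold=RED_FLAG):
    """Slide a cutoff over the series: features come from the weeks up to
    the cutoff, and the label says whether a red flag occurs `lead` weeks
    later. Each training example is (features, label)."""
    examples = []
    for cutoff in range(history, len(series) - lead):
        features = {
            "mean_resolution": mean(series[cutoff - history:cutoff]),
            "trend": series[cutoff - 1] - series[cutoff - history],
        }
        label = series[cutoff + lead - 1] > threshold
        examples.append((features, label))
    return examples

examples = make_examples(weekly_resolution_hours)
```

A rising trend in resolution time at the cutoff precedes the red-flag weeks, which is exactly the kind of pattern the model learns to pick up.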
Once the model was delivered, the deployment team began applying it to incoming data the model had never seen. After observing steady performance across several months of data, we felt confident enough to use the model across several projects. Currently, the AI project manager (tested and integrated into Accenture’s myWizard Automation Platform across delivery projects) serves predictions on a weekly basis and correctly predicts red flags 80% of the time, helping to improve KPIs related to project delivery.
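Tracking a hit rate like this reduces to a simple recall calculation over each week's predictions. The project names and outcomes below are invented for illustration:

```python
# One week of predictions versus what actually happened. Both dicts map
# a (hypothetical) project id to a boolean "red flag" value.
predictions = {"proj_a": True, "proj_b": False, "proj_c": True,
               "proj_d": True, "proj_e": False}
actual_red_flags = {"proj_a": True, "proj_b": False, "proj_c": False,
                    "proj_d": True, "proj_e": True}

def red_flag_hit_rate(predicted, actual):
    """Fraction of actual red flags the model caught (recall)."""
    flagged = [p for p, was_flag in actual.items() if was_flag]
    if not flagged:
        return None  # no red flags this week; rate is undefined
    caught = sum(1 for p in flagged if predicted[p])
    return caught / len(flagged)

rate = red_flag_hit_rate(predictions, actual_red_flags)
```

In this toy week the model catches two of the three real red flags; aggregating the same calculation over many weeks gives a figure comparable to the 80% cited above.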
The next step for the project will be to use the same data to create models that can predict cost overruns, delivery-schedule delays, and other aspects of project execution that are critical to an organization’s business performance.
Done Beats Perfect
As we built the ML model, we were surprised to learn that none of the most hyped data science tools — such as deep learning, AutoML, and “AI that creates AI” — were needed to make it work. In fact, they would not have helped us achieve our key goals. Instead, our biggest requirements were for a robust software engineering practice, automation that allowed domain experts to come in at the right level, and tools that could support comprehensive model testing.
Anticipating that other enterprises may benefit from these lessons, we have organized them into a new machine learning paradigm, which we call ML 2.0. The key steps in this framework are described in a research paper and supported by a suite of open-source software tools.
The four most important aspects of the new ML paradigm are as follows:
Speedy process: ML 2.0 takes users from raw data to a deployed model in seven precise steps. Following this process, a four-person team was able to develop the proof of concept and deploy the necessary models within eight weeks. This would not have been possible under the old paradigm, which requires costly buy-ins, like one-off software built for discovery and complex algorithms whose benefit can’t be quantified.
Greater involvement of domain experts: Domain experts determined key variables — for instance, which specific events posed a risk to project performance, how far in advance the model had to predict for the information to be valuable, and which past projects should be used to train the model. ML 2.0 provided domain experts with a prediction engineering tool that enabled them to set these key parameters and ensure that the model would generate business value.
Automated feature engineering: A vital part of the machine learning process is feature engineering, which involves using domain knowledge to extract patterns, or features, from raw data. Domain experts are often better than machines at suggesting patterns that hold predictive power — for example, that an increase in the average response time for a ticket could eventually lead to poor project performance — but automated software tools are needed to actually calculate these features. We used Featuretools, a DARPA-sponsored open-source library created by Feature Labs, where three of us work. The tool recommended 40,000 candidate patterns, which domain experts narrowed down to the 100 most promising.
Intelligent model-testing: Like most domain experts, software project managers needed to put their new models through multiple rounds of validation and real-world testing before they were confident enough to deploy them. The automated testing suite built into ML 2.0 gave the deployment team the flexibility to simulate previous states of the data, add data that had been withheld from the development process, and run their own tests at several points in time, including real-time testing when it came time to deploy.
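The automated feature engineering aspect can be illustrated with a stdlib-only sketch of the core idea: mechanically applying aggregation primitives to raw records grouped by project, producing candidate features that domain experts then filter. The real Featuretools library does this far more generally; the column names, primitives, and data below are assumptions for the sketch:

```python
from itertools import product
from statistics import mean

# Hypothetical raw ticket records; "project" is the entity we aggregate to.
tickets = [
    {"project": "p1", "resolution_hours": 10, "reopen_count": 0},
    {"project": "p1", "resolution_hours": 30, "reopen_count": 2},
    {"project": "p2", "resolution_hours": 12, "reopen_count": 1},
]

# Aggregation primitives applied to every numeric column, mechanically.
primitives = {"mean": mean, "max": max, "count": len}
numeric_cols = ["resolution_hours", "reopen_count"]

def synthesize(rows, group_key="project"):
    """Group rows by project, then cross every numeric column with every
    primitive to generate one candidate feature per (primitive, column)."""
    groups = {}
    for row in rows:
        groups.setdefault(row[group_key], []).append(row)
    features = {}
    for key, group in groups.items():
        feats = {}
        for col, (name, fn) in product(numeric_cols, primitives.items()):
            values = [r[col] for r in group]
            feats[f"{name}({col})"] = fn(values)
        features[key] = feats
    return features

feature_matrix = synthesize(tickets)
```

Even this toy version produces six features per project from two raw columns; crossing richer primitive sets with many columns is how a tool can reach tens of thousands of candidates, leaving humans only the narrowing-down step.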
The ability to anticipate is a competitive advantage
If companies are going to get real value from machine learning, they need to focus not just on technology, but on process. Machine learning experts, for their part, need to recognize the gap between cutting-edge science and organizations’ ability to actually implement working models aimed at real problems. Closing the implementation gap will require a new approach to machine learning, with plenty of interesting technical problems of its own.
ML 2.0 helps transform the potential of machine learning into tangible business results by putting machine learning at the core of a business function rather than treating it as a separate R&D initiative. Doing so directly influences how organizations run their business, create new revenue streams, re-imagine their products and services, increase operational efficiency, redefine their workforce, and much more. Today, businesses don’t just want answers to questions like: Did we meet our sales target this quarter? Did we reach our target audience? Did our advertising spend meet its objectives? They want to know what is likely to happen in the future. They want to make data-driven predictive decisions, quickly and easily, which is the promise of ML 2.0.
Article from: hbr.org