
Data Rich. Analysis Poor.

Machine Learning is dramatically changing the mobile space for our enterprise customers. Apple’s announcements at WWDC have now pushed ML from the server to the user’s device. The implications are huge and the opportunities are real.

But before diving into what Machine Learning on iOS might mean for our enterprise customers, let me take a little detour…

Way back in the 1990s, I was chasing a Ph.D. in Economics at George Mason University. As you might guess, my fellow grad students and I spent a lot of time banging around ideas for our dissertations. We all knew the horror stories about "the grad students who never leave" and we definitely didn’t want that to happen to us. So I would pick up a dissertation idea, toy with it for a few weeks, then toss it away when it seemed either too hard and too broad, or too narrow and too trivial.

Then one day it hit me, "The guy with the best data wins!" I reasoned that if I could come up with some cool data that nobody else had, the dissertation would practically write itself. First I would get the data, then build a model, then generate statistically significant results. Boom. Dissertation done.

Then I realized that a big university like George Mason had to be swimming in data. So, after a bit of thought, I naively marched into the office of the Dean of Arts and Sciences and asked him to give me the grades for every undergraduate over the last five years. Oh, and please throw in their SAT scores and the salaries of the professors who taught them. And you know what, he did it, after a bit of data cleansing, of course. And I was able to fairly quickly crank out a dissertation that statistically demonstrated that the more you pay professors, the tougher they grade. Even after controlling for department, SAT scores, and lots more, the effect remained. It wasn’t a huge effect, but it was statistically significant.

That dissertation didn’t exactly write itself, but it did teach me a valuable life lesson about the value of good, clean data.

In Heat (1995), Kelso explains how he grabs data

Another story. There’s a scene in the 1995 classic Heat where Robert De Niro’s character is looking for one last bank to rob before heading off to retire in the South Pacific. And of course, he finds a data guy! There, on the porch of what looks like a log cabin, sits ‘Kelso’, in a wheelchair, flipping through a bunch of computer printouts with info on the best bank to rob, and the best day of the week and time of day to rob it. De Niro’s character is incredulous. "How do you get this information?" he asks. The data guy replies, "It comes to you, this stuff just flies through the air, they send this information ‘beamed’ out all over the … place, you just got to know how to grab it."

That was 1995!

Fast forward a few years, and now I’m teaching economics. I’m talking to one of our adjunct professors, a former VP at Wal-Mart. Remember, at this point Amazon is just selling books; Wal-Mart is regarded as the undisputed king of supply-chain management. "The second you buy something at Wal-Mart, somewhere a truck is leaving a dock loaded with the replacement so the shelves are always full," they used to say. But my professor pal wasn’t nearly as confident in Wal-Mart’s data-driven prowess as I expected, though it wasn’t for lack of trying. "Wal-Mart is data rich and analysis poor," he used to say.

Nowadays, everybody feels like that. Data Rich. Analysis Poor. And that, precisely, is the problem that machine learning tries to address. The guy with the best data still wins, but only if he can uncover the secrets it contains, only if he "knows how to grab it."

Does the Model Matter?

Machine Learning is not magic. Behind the scenes lurk the same statistical techniques and algorithms you might remember from college. The difference between ML and the mostly impenetrable stuff in that fat statistics book is that computers have now gotten smart enough to interpret statistical results, and they can adjust the model accordingly. And they can do it very, very quickly.

Let’s start with a simple example.

Y = a + bX + ε

Here the hypothesis is that there is some linear relationship between the X values and the Y values. Statistical methods infer the a and b values so that, when presented with a new set of X values, a new Y value can be predicted. The ε at the end is the error term. In a good model, the value of ε should be relatively small and randomly distributed around zero.
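
To make that concrete, here is a minimal Swift sketch of ordinary least squares, the textbook way to estimate a and b from data. The sample numbers are invented for illustration:

    import Foundation

    // Estimate the intercept a and slope b for the model Y = a + bX + ε
    // using ordinary least squares.
    func fitLine(x: [Double], y: [Double]) -> (a: Double, b: Double)? {
        guard x.count == y.count, x.count > 1 else { return nil }
        let n = Double(x.count)
        let meanX = x.reduce(0, +) / n
        let meanY = y.reduce(0, +) / n
        // b = Σ(xi − meanX)(yi − meanY) / Σ(xi − meanX)²
        var num = 0.0, den = 0.0
        for (xi, yi) in zip(x, y) {
            num += (xi - meanX) * (yi - meanY)
            den += (xi - meanX) * (xi - meanX)
        }
        guard den != 0 else { return nil }
        let b = num / den
        return (meanY - b * meanX, b)
    }

    // Fit on made-up observations, then predict Y for a new X.
    if let fit = fitLine(x: [1, 2, 3, 4, 5], y: [2.1, 3.9, 6.2, 7.8, 10.1]) {
        print("a = \(fit.a), b = \(fit.b), Y at X = 6: \(fit.a + fit.b * 6)")
    }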

Something exactly like this is going on when you type a message into your phone. iOS uses a statistical model to generate predictions about the correct spelling of the word you are currently typing, as well as predictions for the next word you will likely want to type. But does the next word you want to type depend on the last word you typed, or the last five, or the last ten? Does it depend on the time of day? Does it depend on location? Does it matter whether the user is at home or away? In the North or in the South? The orientation of the device? Etc. Etc.

Where would a graduate student even start on a problem like that?

The difference between machine learning and simple statistical estimation techniques is quantitative rather than qualitative. But it is a big quantitative difference. When that Wal-Mart exec was complaining about being data rich and analysis poor, he was thinking of real living statisticians carefully segmenting data, dropping, adding, and deriving variables, and estimating models. One at a time. But with machine learning, all of that craftsmanship, dare I say it, has been automated. Instead of looking for the one model that explains the data, machine learning estimates many models within the data to derive predictions with business value.

Machine learning always starts with a wide and deep rectangle of data that includes every conceivable causal variable. When we know what the user actually typed, we can train the model with that set of data. This training can take significant computational power. But once the model is trained, it becomes computationally efficient to feed in keystrokes to estimate the best word completion suggestions.
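
UIKit already offers a small window into this inference step: UITextChecker surfaces word-completion suggestions from the system’s trained language resources. It is not Core ML, but it shows how cheap on-device prediction can be once the heavy training has happened elsewhere:

    import UIKit

    // Ask the system's spell-checking engine to complete a partially
    // typed word. The heavy lifting happened long before this call.
    let partial = "predi"
    let checker = UITextChecker()
    let range = NSRange(location: 0, length: (partial as NSString).length)
    let completions = checker.completions(forPartialWordRange: range,
                                          in: partial,
                                          language: "en_US")
    print(completions ?? [])  // e.g. ["predict", "predicate", ...]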

Over time, as more data is gathered, the model can be improved. This is the learning part of machine learning. As we learn more of what the user actually typed compared with what the model predicted, we can steadily add precision, or better yet, personalization to the predictions we make.

And the tools that make machine learning far more accessible have evolved. Just last summer, CapTech’s new college hires tackled a machine learning problem for a local client in Richmond. None of them had touched ML before coming to CapTech. But with a little bit of coaching from our data scientists, they were able to produce, in just three weeks, a passable prediction tool that generated some insights, and some raised eyebrows, from a client with years of experience exploring the same data the old-fashioned way.

Now machine learning is everywhere, from Alexa to Netflix to Google ads. And the various flavors of ML models continue to evolve: neural nets, support vector machines, decision trees, clustering, and others. Building these kinds of models now requires a sort of meta-craftsmanship compared to the simple challenge of predicting student grades from SAT scores, salaries, departments, etc.

Apple Announces Core ML

At WWDC this year, Apple introduced the Core ML framework. The two key videos to watch are Introducing Core ML and Core ML in Depth. While developers at the conference reacted with great excitement to the news, it quickly became clear that there are serious limitations to the technology.

The learning itself does not take place on the mobile device. Models must still be designed and trained on a server. The trained models can then be imported into an Xcode project, and from there it is a simple matter to derive results on the user’s device.
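
As a rough sketch of that last step, a compiled model can be loaded and queried through the generic MLModel API. The model name ("BillPredictor") and the feature names below are hypothetical stand-ins, not a real model:

    import CoreML

    do {
        // Xcode compiles BillPredictor.mlmodel into a BillPredictor.mlmodelc
        // bundle at build time. The name is hypothetical.
        guard let url = Bundle.main.url(forResource: "BillPredictor",
                                        withExtension: "mlmodelc") else {
            fatalError("Model missing from bundle")
        }
        let model = try MLModel(contentsOf: url)

        // Input feature names must match the model's input description;
        // these are illustrative.
        let input = try MLDictionaryFeatureProvider(dictionary: [
            "dayOfWeek": 3,
            "hourOfDay": 14
        ])

        // Inference runs on the device; the training already happened
        // on the server.
        let output = try model.prediction(from: input)
        print(output.featureValue(for: "predictedAction") ?? "no prediction")
    } catch {
        print("Prediction failed: \(error)")
    }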

Core ML is a foundational technology. Most developers will never touch it directly. For image-based apps, they will use the new Vision Framework. And for text-based apps, they will use the new language processing features in Foundation, particularly those for named-entity recognition.

Since the announcement, some of the limitations of Core ML have been uncovered and discussed. Oleksandr Sosnovshchenko covers most of these here.

  • Once a trained model is added to an app, the only way to update the model is to update the app.
  • Core ML supports a limited range of ML model types: regression and classification.
  • There are many other libraries that overcome some of these limitations.

Nonetheless, the Core ML announcement is important because it represents a direct challenge to developers to make apps that are more responsive, more fun, and more intuitive to use.

At CapTech, as part of our Mobile Research Initiatives, we are currently experimenting with both the new natural language processing features and the vision framework. Both seem to offer great possibilities for enterprise application developers. For example: Could the Vision framework be used to recognize a check for mobile deposit? Could natural language processing trigger a voice search for products offered by a big box retailer?
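
As a taste of the first experiment, Vision’s rectangle detection can locate a check-shaped region in a photo before any further processing. This is a hedged sketch of the idea, not a mobile-deposit implementation:

    import UIKit
    import Vision

    // Look for a check-like rectangle in a photo.
    func detectCheck(in image: UIImage) {
        guard let cgImage = image.cgImage else { return }

        let request = VNDetectRectanglesRequest { request, _ in
            guard let rectangle = request.results?.first
                    as? VNRectangleObservation else {
                print("No rectangle found")
                return
            }
            // The bounding box arrives in normalized coordinates (0...1).
            print("Possible check at \(rectangle.boundingBox)")
        }
        // A personal check is much wider than it is tall; constrain the
        // aspect ratio (shorter side over longer side) accordingly.
        request.minimumAspectRatio = 0.3
        request.maximumAspectRatio = 0.5

        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }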

What Does the User Want to Do Next?

All this leads back to the beginning of this post. "The guy with the best data wins." Well, in mobile app development, where is all the data? It’s in the analytics, of course! And if there ever was a case of "Data rich. Analysis poor." it is in the mountain of app analytics data we routinely send up after nearly every swipe, tap, and scroll.

Product owners like their analytics data. It gives them a good feel for how users navigate their app. It shows them which features work, which features are ignored, and which features produce the most errors. More sophisticated product owners use analytics to drive A/B testing with their users. Features are toggled on and off based on responses from the server.

By its sheer volume, however, app analytics data rarely gets more than a casual scroll. But machine learning offers real possibilities here. If properly implemented, machine learning ought to be able to answer the most important question faced by app architects:

What does the user want to do next?

At the level of the operating system, it is easy to see how useful this can be. Given my location, given the time of day, given how long it has been since I last unlocked my phone, given the last few apps I have used, what am I most likely to want to do next? Is it sending a text to a friend, checking maps, opening a financial app, or maybe setting an alarm or timer? Machine learning on iOS goes far beyond Siri. The days of mechanistic notifications are coming to an end. iOS 11 shows that we are getting closer to the happy moment of truly useful notifications.

The same predictive analytics should apply equally at the app level. When I open my banking app, for example, what actions am I most likely to take? Do I tend to review transactions, make a transfer, or pay a bill? Many of those actions require many taps, swipes, or scrolls. If there are patterns in the analytics data, machine learning should be able to find them. Even if they are highly individualized patterns, machine learning models may be able to classify users into appropriate buckets and make suggestions in the same way that Netflix makes suggestions about the next shows a user might wish to watch.

Next Steps for ML-ready Apps

In some ways, things haven’t changed that much since the 1990s. In many ways we are still data rich and analysis poor. What machine learning on mobile offers is a way to get a bit more analysis rich.

No doubt it will take some time to prepare apps for the machine-learning future. But there are three steps that can be taken now to lay the groundwork.

Tighten Up Your Analytics

Enterprise apps gather analytics as a matter of course. Developers know how to call the appropriate utility functions to record button taps, screen transitions, network failures, etc. Now is the time to review what is getting sent back to the server. Are we gathering the data that will help us tell the story of what the user is trying to do and, more importantly, what the user might want to do next? The sequence of user actions is just as important as the actions themselves, and our analytics logging must be able to capture that sequence on a per-session basis, as in the sketch below. The goal should be to reduce the number of taps and scrolls necessary to get users where they want to go.
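
Here is a minimal sketch of what session-aware event capture might look like. The type and field names are assumptions for illustration, not any particular analytics SDK:

    import Foundation

    // A sketch of session-aware analytics capture. Each event carries
    // enough context to reconstruct the ordering of actions in a session.
    struct AnalyticsEvent: Codable {
        let sessionID: UUID
        let sequence: Int          // position within the session
        let timestamp: Date
        let name: String           // e.g. "tap", "screenView"
        let screen: String
        let attributes: [String: String]
    }

    final class SessionLogger {
        private let sessionID = UUID()
        private var sequence = 0
        private(set) var events: [AnalyticsEvent] = []

        func log(_ name: String, screen: String,
                 attributes: [String: String] = [:]) {
            sequence += 1
            events.append(AnalyticsEvent(sessionID: sessionID,
                                         sequence: sequence,
                                         timestamp: Date(),
                                         name: name,
                                         screen: screen,
                                         attributes: attributes))
            // In a real app, events would be batched and sent to the server.
        }
    }

    // Usage: the ordering of events is preserved per session.
    let logger = SessionLogger()
    logger.log("screenView", screen: "AccountSummary")
    logger.log("tap", screen: "AccountSummary", attributes: ["button": "PayBill"])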

To be sure, data scientists will still be needed to develop the appropriate models from the captured analytics. But as our college new-hires demonstrated last summer, this model-building enterprise may be less daunting than it first appears. As the name suggests, machine learning means just that: the machine does much of the learning. Just moving a bit closer to what our users want is a big step in the right direction.

Make Your App Routable

While Apple’s development environment has vastly improved over the last few years, the IDE still tempts developers into too much hard-coded navigation from one feature to another. That kind of architecture will make it very difficult to implement the predictions that machine learning might offer. Instead of hard-coding segues from one view controller to another, a routing engine of some type will be required. This routing engine will handle the preparation of views and the necessary transitions between them in a browser-like manner.

For example, in a banking app, the model might predict the user would want to pay off their Verizon bill. It would offer a suggestion that could be expressed as a URL. Something like this:

finTech://feature/paybill?sourceAcct=Checking&payee=Verizon&amount=150

Without a routable infrastructure, responding to URLs like this will look like macro programs from the old days. The user doesn’t want to watch buttons being tapped and views sliding into position. No, if the user wants to pay his Verizon bill, the bill pay screen should be fully loaded with nothing but the Submit button waiting to be tapped.
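
Here is a hedged sketch of such a router, built on URLComponents. The scheme matches the example URL above, but the feature path and the view controller are hypothetical:

    import UIKit

    // A hypothetical destination screen, configured entirely up front.
    final class PayBillViewController: UIViewController {
        var sourceAccount: String?
        var payee: String?
        var amount: Double?
    }

    struct Router {
        // Route a prediction URL such as
        // finTech://feature/paybill?sourceAcct=Checking&payee=Verizon&amount=150
        func route(_ url: URL, from nav: UINavigationController) {
            guard let components = URLComponents(url: url,
                                                 resolvingAgainstBaseURL: false),
                  components.scheme?.lowercased() == "fintech" else { return }

            // Collect the query parameters (sourceAcct, payee, amount).
            var params: [String: String] = [:]
            for item in components.queryItems ?? [] {
                params[item.name] = item.value
            }

            switch components.path {
            case "/paybill":
                let vc = PayBillViewController()
                vc.sourceAccount = params["sourceAcct"]
                vc.payee = params["payee"]
                vc.amount = params["amount"].flatMap(Double.init)
                // Land the user on a fully prepared screen, no replayed taps.
                nav.pushViewController(vc, animated: true)
            default:
                break
            }
        }
    }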

Start with Natural Language Processing

Our early research with the new natural language processing features of iOS 11 is promising. CapTech sees NLP as a major growth area for all sorts of devices, not just mobile. The new tokenizing features in iOS 11 make it much easier to recognize proper names and to programmatically respond to the user’s voice commands. Both Apple Maps and Google Maps respond quite quickly and accurately to voice commands for directions. The same features should begin to make their way into enterprise mobile apps this year.
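
For example, NSLinguisticTagger’s new unit-based API in iOS 11 can pull person, place, and organization names out of a transcribed voice command. The sample sentence here is invented:

    import Foundation

    // Named-entity recognition with NSLinguisticTagger (iOS 11): walk the
    // string and report person, place, and organization names.
    let text = "Pay my Verizon bill and send the receipt to Mark in Richmond."
    let tagger = NSLinguisticTagger(tagSchemes: [.nameType], options: 0)
    tagger.string = text

    let options: NSLinguisticTagger.Options = [.omitPunctuation,
                                               .omitWhitespace,
                                               .joinNames]
    let range = NSRange(location: 0, length: (text as NSString).length)
    let interesting: [NSLinguisticTag] = [.personalName, .placeName,
                                          .organizationName]

    tagger.enumerateTags(in: range, unit: .word,
                         scheme: .nameType,
                         options: options) { tag, tokenRange, _ in
        if let tag = tag, interesting.contains(tag) {
            let token = (text as NSString).substring(with: tokenRange)
            print("\(token): \(tag.rawValue)")
        }
    }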

NLP is the most mature platform on which to integrate machine learning into an existing app. It allows the user’s voice to reveal the user’s next intended action. NLP can also be implemented incrementally, even before the full work of analytics-based modeling is complete.

About the Author

Mark Broski
Mark Broski is CapTech’s Capability Lead for Mobile and Devices. With over 15 years of experience in software development, Mark brings a passion for elegant, test-driven solutions that drive value for our clients. He aims to make CapTech the best place to grow talent that matches both the evolution of technology and the changing needs of our clients.