The last couple of months I’ve been looking at an area called “process mining”–it’s similar to reality mining, but with the goal of figuring out how structured processes, performed by humans, can be tracked and measured by machines. In broad terms, the argument I’ve been working on is that in order to automate and measure the processes in our day-to-day lives (going somewhere, buying something, finding your way around a store), we’ve needed to add technology into the event/process and use that technology to generate data (respectively: GPS tracking, point-of-sale systems that log time and purchase, and online stores that track each and every mouse click you make).
These approaches give us new data, but require that we change how we go about doing things, usually making everything transaction-based–where the transaction is constructed in such a way that a computer or sensor can understand what’s going on. This doesn’t really need to be the case anymore: computers are getting smart enough to start understanding what we’re doing without needing to be built into every step of the process.
Some of the big technologies that I’ve been looking into are video content analysis (VCA), facial recognition, and emotion detection–with the latter two arguably being under the umbrella of VCA. If you walk down the street it’s hard to go a block or two without seeing a video camera keeping tabs on the ebbs and flows of people, and the camera density skyrockets when you head into a store or mall or most any private venue. If we let computers tap into the raw information generated by these surveillance infrastructures, some pretty cool/scary stuff can happen.
Consider walking into a retail store: if you’re in the field of view of multiple cameras, your position in 3D space can be tracked and a map of where you walk around the store can be plotted. If you look at a display kiosk and smile or frown, a relatively low-resolution camera can read your emotional reaction and add it to a “profile” of what you seem to like or dislike. Finally, when checking out at the cash register, you’re in a prime position for facial recognition software to grab and analyze a snapshot of your face (so that you can be recognized more easily next time) and link your purchasing behavior/history (and your credit/debit card number) to your customer file.
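The linkage step described above is essentially event fusion: each camera subsystem emits timestamped events tied to an anonymous track, and checkout-time facial recognition upgrades that track to a durable identity. Here’s a minimal sketch of the idea–all class, field, and ID names are hypothetical, standing in for whatever a real system would use:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical events from the two in-aisle camera subsystems.
@dataclass
class PositionFix:          # output of multi-camera 3D position tracking
    t: float
    x: float
    y: float

@dataclass
class EmotionReading:       # output of a kiosk-facing emotion detector
    t: float
    emotion: str            # e.g. "smile", "frown"
    display_id: str

@dataclass
class VisitTrack:
    """One shopper's anonymous track, upgraded to an identity at checkout."""
    session_id: str
    customer_id: Optional[str] = None   # filled in by the face match at the register
    path: list = field(default_factory=list)
    reactions: list = field(default_factory=list)

    def observe(self, event) -> None:
        # Fuse whatever the cameras report into this one track.
        if isinstance(event, PositionFix):
            self.path.append((event.t, event.x, event.y))
        elif isinstance(event, EmotionReading):
            self.reactions.append((event.display_id, event.emotion))

    def identify(self, customer_id: str) -> None:
        # Facial recognition at checkout links the entire visit to a profile.
        self.customer_id = customer_id

track = VisitTrack(session_id="cam-7-track-42")
track.observe(PositionFix(t=0.0, x=1.0, y=2.0))
track.observe(PositionFix(t=5.0, x=4.0, y=2.5))
track.observe(EmotionReading(t=3.0, emotion="smile", display_id="kiosk-3"))
track.identify("cust-9001")
```

The unsettling part is the last call: everything gathered anonymously during the visit becomes personally identifiable retroactively, the moment the register camera gets a match.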
When all this information is aggregated, simply going to the store to buy some milk turns into an activity that can be broken down and understood. By linking together technologies, companies with retail locations will soon be able to understand the exact paths that customers take through their stores, how often those customers come back, and whether or not they seem to be enjoying the trip–all without changing the customer-facing experience.
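Once each visit is tied to an identity, the cross-visit analysis is just a roll-up keyed by customer. A toy sketch, with hypothetical field names, of how visit frequency and a crude enjoyment signal could fall out of the per-visit records:

```python
from collections import Counter, defaultdict

def aggregate(visits):
    """Roll per-visit records (hypothetical dict shape) into a per-customer
    summary: how often they come back, and a tally of kiosk reactions."""
    summary = defaultdict(lambda: {"visits": 0, "reactions": Counter()})
    for v in visits:
        s = summary[v["customer_id"]]
        s["visits"] += 1
        s["reactions"].update(v["reactions"])
    return dict(summary)

visits = [
    {"customer_id": "cust-9001", "reactions": ["smile", "smile"]},
    {"customer_id": "cust-9001", "reactions": ["frown"]},
    {"customer_id": "cust-1234", "reactions": []},
]
report = aggregate(visits)
# report["cust-9001"]["visits"] == 2; smiles outnumber frowns 2 to 1
```

Nothing here is sophisticated–and that’s the point: once the cameras do the hard perceptual work, turning raw sightings into a behavioral profile is a few lines of bookkeeping.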
The scenario above is, so far as I know, currently hypothetical–but it’s built entirely from existing technology. You can let your imagination run wild coming up with ways to generate and link data about what people are doing, where they’re going, and what they’re saying. As consumers, we’re going to see a shift where our identities are used to identify, segment, and target us like never before–all as a byproduct of just leaving the house. There’s great promise here for the enterprise, and for the customer, great cause for concern (but also, arguably, great benefit).