Last week, my friend – I’ll call her Sally – sent out a group text to our circle of friends. She was horrified and angry about an email she had just received from a national drug store chain that she frequents. It said,
"Time for more hair color?"
The message included a picture of the product she buys, right down to the very shade she uses.
Without exception, each of us who received Sally’s text was appalled. In fact, we all had the exact same reaction. First, a gasp, followed by “seriously?” or “are you kidding?” and then, “that’s outrageous” or “that’s so intrusive” or “that’s way too personal” and, of course, “that’s so creepy.” Then came the questions. “What else are they tracking about you?” “What kind of message are you going to receive from them next?”
Everyone I’ve told about this message has reacted the same way. My unscientific research clearly reveals that this message crossed a line.
This raises the question: How was the decision made to send this message? Did the decision-maker consider whether the message was consistent with the image and reputation of the retailer? Probably not. Was it discussed in advance with a cross-functional, diverse team that included consumer representation? Highly unlikely. Did they consider whether they were providing value to Sally, i.e., did they really think anyone with a mirror needs a reminder to purchase hair color?
Companies should consider adopting a framework to guide them before making decisions involving the use of personal consumer data.
This national retailer just alienated the very customer with whom it was trying to connect more personally. But it did give the rest of us a good example of a Big Data decision that was lawful, but just awful.
This article was originally posted on LinkedIn on October 2, 2015.