Studies on the validity of the Human-Computer Trust Model (HCTM) suggest that there are three important factors:
- Perceived Risk: the user’s subjective assessment of the likelihood of a negative outcome and how concerned they are about such an outcome.
- Benevolence: the ability of the technology to help the user and act in their best interest.
- Competence: the technology’s ability to perform the functions it promises.
Interestingly, of these factors, competence seems to have the greatest effect on trust and adoption. This means that the inconvenience of having to manually press a button on a vacuum that should be autonomous may break a consumer’s trust in the technology more than the very real possibility of the technology being used nefariously against them.
What does this mean for the design of technologies?
One takeaway is that security is not as top of mind as it probably should be, or appears to be. In a study of consumers’ adoption of IoT technologies, perceived security and privacy risks did not reduce adoption, even though people generally rate privacy and security as concerns. For now, maintaining consumers’ trust around privacy and security only requires complying with security standards and data protection acts.
However, this is a new area of research, and that could change. Experts in the field therefore recommend the more proactive approach of “trust by design”. Instead of waiting until consumers lose trust in your technology because of perceived risk, be benevolent: make privacy a default setting, use the latest security protections, and be transparent about your practices. Do not wait for policies to catch up with consumers’ demand for privacy and security.
The most important way to build trust with users is to create technology that is useful and dependable. The good news is that this principle merely reinforces good design practice. Any way you slice it, you need to invest in knowing your consumers. That makes user-centered design more important than ever, because a trustworthy product is a premium product.
Now, much of the functionality of IoT devices depends on connectivity, and the reliability of that connectivity is generally out of the manufacturer’s control. For example, after the AWS outage, many of iRobot’s Roomba customers trust their vacuums less.
Power and internet outages are nearly guaranteed to occur at some point, and they are out of your control. But, like many of life’s problems, even though you cannot control what happens, you can control how you respond. Determine all the ways the product can fail, assume it will fail, and design ways to handle those failures.
When designing for failure, it can be helpful to think about a different aspect of trust: reciprocity. People tend to anthropomorphize technology. The way they interact with technology conforms to social norms, and they expect their devices to reciprocate those norms. The more a device reciprocates, the more the consumer trusts it. The process is not unlike developing a trusting relationship with a person.
So, if your product were a person, how would you want them to handle unforeseen difficulties, from a branding perspective? Is your product the sarcastic friend who makes a witty joke to alleviate tension but doesn’t really solve the problem? Or maybe it is the can-do, resourceful type that has backed up essential information from the cloud to a local hard drive. It might not have all its services available, but it still has your back. Just don’t be the friend who shuts down and bails when things get tough. No one likes that guy!
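The “resourceful friend” behavior above can be sketched as a simple fallback pattern: prefer the cloud, cache what you get, and serve the cached copy when the cloud is unreachable. This is a minimal illustration, not any vendor’s actual implementation; the function, client, and cache file names here are hypothetical.

```python
import json
import os

# Hypothetical local cache file for illustration.
CACHE_PATH = "schedule_cache.json"

def fetch_cleaning_schedule(cloud_client):
    """Prefer the cloud copy of the schedule, but degrade gracefully."""
    try:
        # Hypothetical cloud API call; assumed to raise ConnectionError
        # when the service is down.
        schedule = cloud_client.get_schedule()
        # Cache the fresh copy locally so it survives the next outage.
        with open(CACHE_PATH, "w") as f:
            json.dump(schedule, f)
        return schedule, "cloud"
    except ConnectionError:
        # Cloud is down: serve the last known data instead of failing.
        if os.path.exists(CACHE_PATH):
            with open(CACHE_PATH) as f:
                return json.load(f), "local cache"
        # No cache either: fall back to a safe built-in default.
        return {"daily": "09:00"}, "default"
```

The point is the shape, not the details: the device keeps working in a reduced but honest mode, which matches users’ social expectations better than shutting down entirely.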
Trust Predicts Technology Adoption
New technologies are, by their very nature, tough sells. The initial buy-in depends on consumers’ trust in the company and the product. And it turns out that the most important component of that trust is functionality.
Trust by design requires you to manage consumers’ perceived risk, be benevolent, and, most importantly, build a product that meets consumers’ needs. That means meeting and exceeding privacy and security guidelines. It means communicating with consumers in a consistent, social manner in all your interactions. And it means supporting them in times of difficulty with a failure management plan that is in line with their social expectations.
– Written by Jennifer Seaton
In November, many North Americans were outraged when Amazon Web Services (AWS) went down in multiple regions. As expected, many businesses that rely on AWS hosting were affected. Large social media sites such as Twitch, LinkedIn, and Facebook all depend on AWS cloud-based products.
But what was a little less expected was how much it affected the day-to-day lives of many North Americans. Many smart-home devices run on AWS, including Roomba robot vacuums, Ring video doorbells, and smart lights. And the number of smart-home devices is expected to grow. The incident passed fairly quickly, but it was another demonstration of our growing dependence on the services tech companies provide.
So far, consumers are optimistic about the future of the Internet of Things (IoT). A worldwide survey found that 71% of people believe that IoT has the potential to improve their lives. However, this optimism is tempered by growing concerns about the lack of security and privacy. It seems that although consumer trust is strong now, it is not guaranteed to stay that way.
For this reason, it is important to understand how consumers develop trust in technology. Are there metrics that could gauge a consumer’s trust, and do those metrics predict technology adoption?