Privacy breaches undermine trustworthiness of the Tim Hortons brand

Jordan Richard Schoenherr, Concordia University

The Office of the Privacy Commissioner (OPC) of Canada, along with three provincial counterparts, released a scathing report on the Tim Hortons app on June 1.

A year after the app’s seemingly benign update in May 2019, a journalist’s investigation found that the app had been collecting large amounts of users’ location data – data that could be used to infer where they work and live, as well as their mobility patterns.

Although the OPC report noted that the “actual use of data by Tim Hortons was very limited”, it concluded that there was “no legitimate need to collect large amounts of sensitive location information where it never used such information for the stated purposes”. This report follows OPC concerns about the government’s use of mobile phone data during the pandemic.

The joint report was met with both openly negative and cynical reactions on social media. Many were not surprised by the data collection practices themselves. Users have likely become desensitized to the collection of behavioral traces to build large datasets, a sort of learned helplessness. What is shocking to many is the perceived breach of the trust traditionally placed in this iconic Canadian institution.

Everywhere

The Tim Hortons case illustrates our growing entanglement with the artificial intelligence (AI) that forms the backbone of seemingly benign applications.

AI has permeated all areas of human experience. Home technologies – cellphones, smart TVs, robot vacuums – pose an acute problem because we trust these systems without much thought. Without trust, we would need to check and recheck the inputs, operations and outputs of these systems. But when people are converted into data, new social and ethical issues emerge from that unqualified trust.

Technologies evolve continuously, perhaps outpacing our understanding of how they operate. We cannot assume that users understand the implications of agreements accepted with a single click, or that companies fully understand the implications of data collection, storage and use. For many, AI is still science fiction. Popular science coverage frequently focuses on the formidable and terrifying features of these systems.

At the cold heart of this technology are computer algorithms that vary in their simplicity and intelligibility. Complex algorithms are often described as “black boxes”, their content lacking transparency for users. When autonomy and privacy are at stake, this lack of transparency is particularly problematic. Compounding these issues, developers don’t necessarily understand how or why privacy engineering is needed, leaving users to determine their own needs.

The data collected by or used to train these systems often reflects “dark data”: datasets whose contents are opaque due to proprietary or privacy issues. How the data was collected, its accuracy and its biases all have to be clearly established. This has led to calls for explainable AI – systems whose functions can be understood by users and decision-makers, so that they can examine how well these systems’ operations support social values.

Paths to trust

Our trust is not always based on facts. A basic sense of trust can be induced by repeated exposure to an object or entity, rather than being hard-earned through experiences of direct exchange or knowledge of social norms of fairness. The problem with apps – Tim Hortons’ included – stems from these issues.

Despite a brief drop in confidence, the brand remains a staple in Canada. Tim Hortons stores are a fixture of Canada’s physical and consumer landscapes. Our familiarity with the brand makes its products – physical or digital – seem innocuous. It is therefore unreasonable to expect consumers to suspect that their location data was being collected every few minutes throughout the day.

According to the Gustavson Brand Trust Index, Tim Hortons was voted Canada’s most trusted brand in 2015.

Dark patterns in design

In design, dark or deceptive patterns reflect the active exploitation of design features for the benefit of the application developer or distributor. The most prominent case to date is the Cambridge Analytica scandal, where Facebook user data was used to try to influence how people voted.

Despite declining trust in Facebook, users continued to use the platform with only relatively minor changes in their behavior.

Facebook’s initial response pointed out that “People knowingly provided their information…and no sensitive information was stolen or hacked.” However, users spend very little time reading privacy policies and, when they are not presented with them, do not seek them out.

Claims that data anonymization – the removal of personally identifiable information – can eliminate privacy concerns are also overly simplistic. Merging multiple data sets provides a more complete picture of an individual: what they prefer, how they behave, what they owe, who they date. With enough information, a detailed picture of a person can be created.
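To make the point about merging data sets concrete, here is a minimal, hypothetical sketch in Python using the pandas library. The dataset names, columns and values are invented for illustration and are not drawn from the Tim Hortons case; the example simply shows how two “anonymized” tables can be joined on shared quasi-identifiers, such as postal code and birth date, to re-attach a name to behavioral records.

```python
# Hypothetical illustration only: all names, columns and values are invented.
import pandas as pd

# An app-usage dataset with direct identifiers (names, account IDs) removed.
app_activity = pd.DataFrame({
    "postal_code": ["H3G 1M8", "K1A 0B1"],
    "birth_date": ["1985-04-12", "1990-09-30"],
    "visits_per_week": [9, 3],
    "most_frequent_location": ["downtown office tower", "suburban home"],
})

# A second dataset (for example, a leaked loyalty-program export) that still
# carries names alongside the same quasi-identifiers.
loyalty_members = pd.DataFrame({
    "name": ["A. Example", "B. Example"],
    "postal_code": ["H3G 1M8", "K1A 0B1"],
    "birth_date": ["1985-04-12", "1990-09-30"],
})

# Joining on the shared quasi-identifiers re-attaches identities to the
# supposedly anonymous behavioral records.
reidentified = app_activity.merge(
    loyalty_members, on=["postal_code", "birth_date"], how="inner"
)
print(reidentified[["name", "visits_per_week", "most_frequent_location"]])
```

Even coarse attributes can act as quasi-identifiers when combined, which is why stripping names alone rarely guarantees anonymity.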

In some cases, AI can be as good as humans at predicting personality traits. In other cases, AI can infer sensitive information that was never disclosed. This could threaten our personal autonomy.

Commit to data ethics

Given the growing capabilities of AI, coupled with a lack of transparency in how data is collected and used, the validity of user consent must be questioned. The OPC judgment speaks to this point: users would not reasonably expect the types or amount of detail collected about their behavior given the nature of the application.

Although this information may not have been used by Tim Hortons, we must consider the unintended consequences of data collection. For example, cybercriminals can steal and sell this information, making it accessible to others. By simply collecting this data, institutions, organizations and companies take responsibility for our information, how it is protected and how it is used. They must be held accountable.

We don’t expect a more convenient way to buy coffee and donuts to lead to privacy violations and a deepening of our digital footprint. That trade-off cannot be justified.

There is no one-size-fits-all solution to our privacy issues, and many users are unlikely to go offline. Users, developers, distributors and regulators must be brought together to establish more direct and transparent relationships with one another. New skills and competencies need to be developed in our education system to make sense of the social consequences of technology use. And more nimble public institutions are needed to address these issues.

Jordan Richard Schoenherr, Assistant Professor, Concordia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


