Who Watches The Algorithms?

Algorithms are present in many areas of our lives. When we search for something on Google, its algorithms offer us the most relevant results based on a set of parameters. The same thing happens when Netflix or Spotify recommends a series or a song, when Facebook decides which content to show us and which suggestions to offer, when a dating app picks profiles similar to ours, and so on.

They also intervene in weightier matters. For example, banks use algorithms to decide whether to grant us a mortgage. They are also behind every decision an autonomous vehicle makes while driving. And many companies use algorithms to screen applications in their hiring processes.

Thus, the digitization and automation of all kinds of tasks mean that algorithms intervene in almost every moment of our day-to-day lives, shaping our future. It is therefore worth asking what these algorithms are, how they are designed, and whether they are neutral or conditioned by certain biases.

What are algorithms?

An algorithm is a set of operations, instructions, steps, or defined rules that allows a specific problem to be solved. In principle, algorithms offer an objective way of making decisions based on inputs: from this data, a machine only has to follow the predetermined steps to arrive at a solution.
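To make this concrete, here is a classic example sketched in Python: Euclid's algorithm for the greatest common divisor. It is a short, fixed sequence of rules that turns two inputs into a solution, with no judgment involved at any step.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat a fixed rule until the answer appears."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, remainder of a divided by b)
    return a

print(gcd(48, 18))  # 6
```

Given the same inputs, this procedure always produces the same output, which is what gives algorithms their apparent objectivity.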

However, the objectivity of algorithms is not complete when their creation depends on the computer programmers responsible for translating real situations into machine language. They can grant more or less relevance to certain factors, conditioning the solution the algorithm arrives at. In addition, an algorithm's response will depend, to a large extent, on the data it is fed.

For example, the Twitter algorithm offers each user content tailored to their profile by valuing different elements and giving each one a different weight, taking into account aspects such as the recency of the tweet, whether it includes multimedia content, its interactions, the account that publishes it, and the age and intensity of the relationship between the author and the reader. Someone decides all these parameters, so the final decision is not purely objective.
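A minimal sketch of such a weighted ranking is shown below. The factor names and weights are hypothetical, chosen only for illustration; Twitter does not publish its actual values. The point is that whoever picks these numbers decides what the timeline favors.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    hours_old: float        # how long ago it was posted
    has_media: bool         # photo or video attached
    interactions: int       # likes, replies, retweets
    author_affinity: float  # 0..1, how closely the reader follows this account

# Hypothetical weights: not Twitter's real values.
W_RECENCY, W_MEDIA, W_INTERACT, W_AFFINITY = 2.0, 0.5, 0.1, 3.0

def score(t: Tweet) -> float:
    recency = 1.0 / (1.0 + t.hours_old)  # newer tweets score higher
    return (W_RECENCY * recency
            + W_MEDIA * t.has_media
            + W_INTERACT * t.interactions
            + W_AFFINITY * t.author_affinity)

timeline = sorted(
    [Tweet(1.0, False, 5, 0.9), Tweet(12.0, True, 200, 0.1)],
    key=score, reverse=True,
)
```

Doubling W_AFFINITY or halving W_INTERACT reorders the timeline without any change in the tweets themselves: the ranking reflects the designer's choices as much as the data.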

To this must be added a risk linked to the growing complexity of the technology. Artificial intelligence techniques have evolved so much that not even the owners of the algorithms know precisely how they work or how they reach their decisions; they have become ‘black boxes.’

Risk of discrimination

Algorithms are not immune to the danger of unintentional bias. Several cases have already been detected in which different groups were discriminated against. For example, the Netherlands has banned an algorithm that harmed its most disadvantaged people.

The problem lay in an analysis system used to detect possible fraud against the State. The system studied a wide range of taxpayer data (income, taxes, pensions, subsidies, insurance, type of residence, fines, integration, education, debts, etc.) to calculate, through algorithms, each person's propensity to defraud the Administration.

The controversy arose from a report by the UN Special Rapporteur on extreme poverty and human rights, who warned that this tool stigmatizes citizens with lower incomes and of immigrant origin. A Dutch court has recognized that the system violates citizens' privacy and rights, since the risk model it develops can have unwanted effects, such as discrimination against certain citizens.

Another example is the controversy surrounding the credit limit of the Apple Card, which appears to have a gender bias. An American businessman reported on his Twitter account that the company had granted him a credit limit 20 times higher than the one given to his wife, even though they file a joint tax return and she has a better credit score.

On the other hand, algorithms depend on the data they are built from. For example, facial recognition algorithms have been found to be fed with data sets that contain mostly faces with Caucasian features, so they are less well trained to recognize faces of other origins. This bias can lead to problems with police or security forces if misidentifications occur at places like airports or borders.
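One way this kind of skew can be surfaced, sketched here with made-up evaluation records, is to break recognition accuracy down by demographic group rather than reporting a single aggregate figure:

```python
from collections import defaultdict

# Hypothetical evaluation records: (group_label, correctly_recognized).
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

stats = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in results:
    stats[group][0] += int(correct)
    stats[group][1] += 1

overall = sum(c for c, _ in stats.values()) / len(results)
print(f"overall accuracy: {overall:.0%}")  # 50%, which hides the gap
for group, (correct, total) in sorted(stats.items()):
    print(f"{group}: {correct / total:.0%}")  # 75% vs 25%
```

An aggregate score of 50% looks like one mediocre system; the per-group breakdown reveals two very different ones.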

The perpetuation of stereotypes is also a risk. Amazon decided to stop using an algorithm to select candidates for its job offers after discovering that it discriminated against women. The tool had been fed with the profiles of job applicants from the previous decade, most of whom were men. From this, the system's artificial intelligence inferred that male profiles were preferable.
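The mechanism is easy to reproduce in miniature. The following toy illustration is not Amazon's system; it simply shows how a model that learns from historically skewed hiring records can end up penalizing a feature that acts as a proxy for gender:

```python
# Hypothetical training data: (resume_mentions_womens_club, was_hired).
# Past hires were mostly men, so the proxy feature correlates with rejection.
historical = [(0, 1), (0, 1), (0, 1), (0, 1), (1, 0), (1, 0)]

def hire_rate(feature_value: int) -> float:
    """'Train' by counting: estimate P(hired | feature) from the records."""
    outcomes = [hired for feat, hired in historical if feat == feature_value]
    return sum(outcomes) / len(outcomes)

print(hire_rate(0))  # 1.0 -> resumes without the term look "preferable"
print(hire_rate(1))  # 0.0 -> resumes with the term are ranked last
```

The model has done nothing wrong in a statistical sense; it has faithfully learned the bias present in its training data, which is exactly the problem.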
