Google Says It Will Block Autocomplete for Searches That Suggest Voting Outcomes

At Google, we aim to empower users by offering tools that help them discover relevant information. We have long recognized the rights of marginalized communities as central to helping users find the answers they need. One example is Google’s work on identifying and removing hate speech, which in recent days has helped police teams distinguish violent graffiti from legitimate speech.

As the world searches for information, people form meaningful queries, ranging from “why does an eye turn inside out?” to “how do elephants walk on the ground?” To provide relevant information, we sometimes surface speculative definitions or synonyms for words in a query, in the hope that those clues will help users learn more about related topics. Search engines are a tool, not a ranking of truth: they serve up links and information, and they tell us which potential queries to take into account as we work to provide comprehensive and useful online resources. That said, in 2018, a collaborative effort to listen to how users express themselves about race surfaced a very small number of queries with serious implications. We do not want people to imagine we have considered revoking access to information; that is not a conversation we are ready to have.
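
The synonym behavior described here is roughly what information-retrieval systems call query expansion. As a loose illustration only (Google’s actual system is not public; the synonym table and `expand_query` function below are invented for this sketch), a minimal version in Python might look like this:

```python
# Illustrative sketch of synonym-based query expansion. The synonym
# table and expand_query are hypothetical examples; they do not
# reflect Google's actual system.
SYNONYMS = {
    "elephants": ["pachyderms"],
    "walk": ["move", "travel"],
}

def expand_query(query: str) -> list[str]:
    """Return the original query plus variants with one synonym substituted."""
    tokens = query.lower().split()
    variants = [query]
    for i, token in enumerate(tokens):
        for synonym in SYNONYMS.get(token, []):
            variants.append(" ".join(tokens[:i] + [synonym] + tokens[i + 1:]))
    return variants

print(expand_query("how do elephants walk on the ground"))
# ['how do elephants walk on the ground',
#  'how do pachyderms walk on the ground',
#  'how do elephants move on the ground',
#  'how do elephants travel on the ground']
```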

“It feels obvious, but no, it wasn’t.”

Who understands why one person would search “what do white people think about minority groups,” while another, running the same search, reads the results differently? Who gets upset that someone researching this topic with the same questions would get results implying prejudiced thinking? Most experts agree that these searches carry no harmful or even questionable intent. But that doesn’t stop them.

On both technological and philosophical grounds, we believe we have an obligation to defend against these abusive behaviors. So we are working to use machine learning to bring more transparency to this work, and to publish a more detailed public disclosure about the questions we receive from our users. We will begin rolling this out soon, though at launch it will not include questions from active campaigns or any queries people enter as part of a campaign. To be clear, this is a disclosure about the nature of our conversations with users; it does not indicate that any search result is or will be removed.
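
As a rough sketch of the exclusion described above (the keyword list and `eligible_for_disclosure` function are assumptions made for illustration; Google has not published its criteria), a disclosure filter might look like this:

```python
# A minimal sketch of filtering campaign-related queries out of a public
# disclosure set. CAMPAIGN_TERMS and the matching rule are hypothetical;
# the real criteria and any internal tooling are not public.
CAMPAIGN_TERMS = {"vote for", "donate to", "campaign"}

def eligible_for_disclosure(query: str) -> bool:
    """Exclude queries tied to active campaigns from the public disclosure."""
    lowered = query.lower()
    return not any(term in lowered for term in CAMPAIGN_TERMS)

queries = [
    "what do white people think about minority groups",
    "vote for candidate x",
]
disclosed = [q for q in queries if eligible_for_disclosure(q)]
print(disclosed)  # only the non-campaign query remains
```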

In the future, the topics we routinely examine are also likely to include bias-related queries. Given the growing debate over the utility of Google Search in this area, we will post more information in a follow-up next week.

A Google spokesperson responded to a question by telling me, “As we’ve said before, algorithmic content change reviews are designed to be transparent, and to encourage feedback from our users. That said, algorithmic reviews are not intended to be 100% foolproof, and potentially incorrect results are a fact of life for every search engine. Google is continually looking for ways to make our algorithms better.”
