Autocomplete predictions are a helpful feature of the Google Search box: as you start typing, Google automatically guesses what you’re trying to type. The feature is designed to make it easier for you to complete searches that you’ve started to enter.
The question is how these predictions are automatically generated from actual searches, and how the feature lets you finish typing the query you already had in mind.
Autocomplete predictions represent real searches that have been performed on Google. To decide which predictions to display, Google’s systems start by looking at popular and trending queries that match what someone is beginning to enter in the search box. For example, if you were to type “best star trek …”, Google would look for the common completions that tend to follow, such as “best star trek series” or “best star trek episodes.” See the picture below:

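At its most basic, this step can be pictured as a prefix match over a log of past queries ranked by popularity. Here is a minimal Python sketch of that idea; the query strings and counts are invented for illustration, and Google’s production systems are of course far more sophisticated.

```python
from collections import Counter

# Toy query log with invented popularity counts.
query_log = Counter({
    "best star trek series": 120,
    "best star trek episodes": 95,
    "best star trek movies": 60,
    "best star wars movies": 150,
})

def predict(prefix, k=3):
    """Return the k most popular logged queries that start with prefix."""
    matches = [q for q in query_log if q.startswith(prefix)]
    matches.sort(key=lambda q: query_log[q], reverse=True)
    return matches[:k]

print(predict("best star trek"))
# -> ['best star trek series', 'best star trek episodes', 'best star trek movies']
```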
Here’s Google’s explanation of how Autocomplete predictions work at the most basic level: “We don’t just show the most common predictions overall. We also consider things like the language of the searcher or where they are searching because these make predictions far more relevant.”
Autocomplete predictions also differ by location. The image below shows the predictions for someone searching for “driving test” in the U.S. state of California versus the Canadian province of Ontario. The predictions name relevant local places and even use the Canadian spelling “centre” rather than the American spelling “center.”

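One hypothetical way to picture this locale awareness is to keep separately weighted query statistics per region, so the same prefix ranks differently depending on where the searcher is. The region codes, queries, and counts below are all invented:

```python
# Invented per-region query logs: the same prefix, different regional stats.
regional_logs = {
    "US-CA": {  # California
        "driving test california": 80,
        "driving test center near me": 45,
    },
    "CA-ON": {  # Ontario
        "driving test ontario": 70,
        "driving test centre near me": 50,
    },
}

def predict(prefix, region, k=2):
    log = regional_logs.get(region, {})
    matches = [q for q in log if q.startswith(prefix)]
    matches.sort(key=lambda q: log[q], reverse=True)
    return matches[:k]

print(predict("driving test", "US-CA"))  # American spelling, U.S. place names
print(predict("driving test", "CA-ON"))  # Canadian spelling, Ontario place names
```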
“To provide better predictions for long queries, our systems may automatically shift from predicting an entire search to portions of a search. For example, we might not see a lot of queries for “the name of the thing at the front” of some particular object. But we do see a lot of queries for “the front of a ship” or “the front of a boat” or “the front of a car.” That’s why we’re able to offer these predictions toward the end of what someone is typing.”

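A simple way to illustrate shifting “from predicting an entire search to portions of a search” is to fall back to matching progressively shorter trailing portions of the typed text whenever the full text is too rare to match. This mechanism is an assumption made for illustration, with invented data:

```python
# Invented log of short queries that real users do search for.
tail_log = {
    "the front of a ship": 40,
    "the front of a boat": 35,
    "the front of a car": 30,
}

def predict_tail(typed, max_tail_words=5, k=3):
    """Match ever-shorter trailing portions of what was typed."""
    words = typed.split()
    for start in range(max(0, len(words) - max_tail_words), len(words)):
        tail = " ".join(words[start:])
        matches = [q for q in tail_log if q.startswith(tail)]
        if matches:
            matches.sort(key=lambda q: tail_log[q], reverse=True)
            return matches[:k]
    return []

# The full phrase is rare, but its tail "the front of a" matches plenty.
print(predict_tail("the name of the thing at the front of a"))
```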
“We also take freshness into account when displaying predictions. If our automated systems detect there’s rising interest in a topic, they might show a trending prediction even if it isn’t typically the most common of all related predictions that we know about.”
For example, searches for a basketball team are probably more common than searches for any individual game. But if that team has just won a big game against a rival, timely game-related predictions may be more useful to people seeking information that’s relevant at that moment.
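One way to imagine the freshness signal is as a blend of long-term popularity with very recent interest, so a trending query can temporarily outrank an all-time favorite. The queries, counts, and the 0.7/0.3 weighting below are invented for illustration:

```python
# Invented candidate stats: long-term query volume vs. the last hour.
candidates = {
    "lakers roster": {"all_time": 1000, "last_hour": 5},
    "lakers game tonight": {"all_time": 300, "last_hour": 90},
}

def score(stats, trend_weight=0.7):
    """Blend normalized long-term popularity with recent interest."""
    max_all = max(c["all_time"] for c in candidates.values())
    max_recent = max(c["last_hour"] for c in candidates.values())
    popularity = stats["all_time"] / max_all
    freshness = stats["last_hour"] / max_recent
    return (1 - trend_weight) * popularity + trend_weight * freshness

ranked = sorted(candidates, key=lambda q: score(candidates[q]), reverse=True)
print(ranked)  # the trending game query now outranks the all-time favorite
```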
“Predictions also will vary, of course, depending on the specific topic that someone is searching for. People, places, and things all have different attributes that people are interested in.”
For example, someone searching for a “trip to New York” might see a prediction of “trip to New York for Christmas,” as that’s a popular time to visit that city. In contrast, “trip to San Francisco” may show a prediction of “trip to San Francisco and Yosemite.”
“Even if two topics seem to be similar or fall into similar categories, you won’t always see the same predictions if you try to compare them. Predictions will reflect the queries that are unique and relevant to a particular topic.”
Overall, Autocomplete is a complex time-saving mechanism that doesn’t just show the most common queries on a given topic. That’s also why it differs from Google Trends, a platform for journalists and anyone else interested in researching the popularity of searches and search topics over time, and why the two shouldn’t be directly compared.
As explained, predictions are intended to help you finish typing what you had in mind more quickly. But predictions, like everything, aren’t flawless. They can occasionally be surprising or shocking, and it’s possible for people to read them as statements of fact or opinion. Google also recognizes that some queries are less likely to lead to reliable content.
“We deal with these potential issues in two ways. First and foremost, we have systems designed to prevent potentially unhelpful and policy-violating predictions from appearing. Secondly, if our automated systems don’t catch predictions that violate our policies, we have enforcement teams that remove predictions in accordance with those policies.”
“Our systems are designed to recognize terms and phrases that might be violent, explicit, hateful, disparaging, or dangerous. When we recognize that such content might surface in a particular prediction, our systems prevent it from displaying.”
“People can still search for such topics using those words, of course. Nothing prevents that. We simply don’t want to unintentionally shock or surprise people with predictions they might not have expected.”
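At its core, this kind of pre-display safeguard can be pictured as a filter that suppresses candidate predictions matching known problematic patterns, while leaving the user free to run the search themselves. The patterns and predictions in this sketch are invented placeholders, not Google’s actual rules:

```python
import re

# Invented placeholder patterns standing in for real policy classifiers.
POLICY_PATTERNS = [
    re.compile(r"\bviolent-term\b"),
    re.compile(r"\bhateful-term\b"),
]

def filter_predictions(predictions):
    """Drop any candidate prediction that matches a policy pattern."""
    return [
        p for p in predictions
        if not any(pat.search(p) for pat in POLICY_PATTERNS)
    ]

print(filter_predictions(["harmless query", "query with hateful-term"]))
# -> ['harmless query']; the filtered search can still be typed manually.
```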
“Using our automated systems, we can also recognize if a prediction is unlikely to return much reliable content.”
For example, after a major news event, there can be any number of unconfirmed rumors or information spreading, which we would not want people to think Autocomplete is somehow confirming.
“In these cases, our systems identify if there’s likely to be reliable content on a particular topic for a particular search. If that likelihood is low, the systems might automatically prevent a prediction from appearing. But again, this doesn’t stop anyone from completing a search on their own, if they wish.”
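One could sketch this as a simple threshold on an estimated likelihood that a query will lead to reliable content. Everything below, including the stand-in reliability_estimate() function and the 0.4 threshold, is a hypothetical illustration rather than Google’s actual mechanism:

```python
def reliability_estimate(prediction):
    """Hypothetical stand-in for a learned reliability signal."""
    known = {
        "major news event timeline": 0.9,
        "major news event rumor": 0.1,  # unconfirmed rumor: low score
    }
    return known.get(prediction, 0.5)

def gate(predictions, threshold=0.4):
    """Hold back predictions unlikely to lead to reliable content."""
    return [p for p in predictions if reliability_estimate(p) >= threshold]

print(gate(["major news event timeline", "major news event rumor"]))
# -> ['major news event timeline']; the rumor query is still searchable.
```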
“While our automated systems typically work very well, they don’t catch everything. This is why we have policies for Autocomplete, which we publish for anyone to read.”
“Our systems aim to prevent policy-violating predictions from appearing. But if any such predictions do get past our systems, and we’re made aware (such as through public reporting options), our enforcement teams work to review and remove them, as appropriate. In these cases, we remove both the specific prediction in question and often use pattern-matching and other methods to catch closely-related variations.”
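The pattern-matching idea can be illustrated by deriving a tolerant regular expression from a confirmed removal, so trivially reworded variants are caught as well. The example strings and the normalization rule here are assumptions for illustration:

```python
import re

def variation_pattern(removed_prediction):
    """Build a pattern that also catches close variants of a removal."""
    tokens = removed_prediction.lower().split()
    # Match the same words regardless of case or extra whitespace.
    return re.compile(r"\s+".join(map(re.escape, tokens)), re.IGNORECASE)

pattern = variation_pattern("some violating prediction")
for candidate in ["Some  Violating Prediction", "an unrelated prediction"]:
    print(candidate, "->", "removed" if pattern.search(candidate) else "kept")
```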
“As an example of all this in action, consider our policy about names in Autocomplete, which began in 2016. It’s designed to prevent showing offensive, hurtful, or inappropriate queries in relation to named individuals so that people aren’t potentially forming an impression about others solely off predictions.”
“We have systems that aim to prevent these types of predictions from showing for name queries. But if violations do get through, we remove them in line with our policies.”
Having explained why certain predictions don’t appear, it’s also helpful to note that predictions are not search results. Occasionally, people concerned about predictions for a specific query assume that Google is also blocking the actual search results. That’s not the case. Autocomplete policies apply only to predictions; they do not apply to search results.
“We understand that our protective systems may prevent some useful predictions from showing. In fact, our systems take a particularly cautious approach when it comes to names and might prevent some non-policy violating predictions from appearing.”
“However, we feel that taking this cautious approach is best. That’s especially because even if a prediction doesn’t appear, this does not impact the ability for someone to finish typing a query on their own and finding search results.”