What are recommendations on Facebook?

We make personalized recommendations to the people who use our services to help them discover new communities and content. Both Facebook and Instagram may recommend content, accounts, and entities (such as Pages, Groups, or Events) that people do not already follow. Some examples of our recommendations experiences include Pages You May Like, “Suggested For You” posts in News Feed, People You May Know, and Groups You Should Join. Some entities might have limited or no access to features that encourage engagement, and might not be as widely recommended on Facebook as other entities.
Our goal is to make recommendations that are relevant and valuable to each person who sees them. We work toward that goal by personalizing recommendations: each person sees suggestions tailored specifically to them. For example, if you and another person have Facebook Friends in common, we may suggest that person as a potential new Friend for you.
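To make the mutual-Friends example concrete, here is a minimal sketch, in Python, of ranking potential Friends by mutual-friend count. It is purely illustrative: the graph, names, and scoring below are assumptions, and a real system such as People You May Know draws on many more signals than this.

```python
from collections import Counter

# Hypothetical friendship graph (person -> set of friends); purely
# illustrative, not Facebook's actual data model or algorithm.
FRIENDS = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "carol", "dave"},
    "carol": {"alice", "bob", "dave"},
    "dave":  {"bob", "carol"},
}

def suggest_friends(person, top_n=5):
    """Rank people the user is not yet friends with by how many
    friends they share with `person`."""
    mutuals = Counter()
    for friend in FRIENDS.get(person, set()):
        for candidate in FRIENDS.get(friend, set()):
            if candidate != person and candidate not in FRIENDS[person]:
                mutuals[candidate] += 1
    return mutuals.most_common(top_n)

print(suggest_friends("alice"))  # [('dave', 2)] -- dave shares two Friends
```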
What baseline standards does Facebook maintain for its recommendations?
At Facebook, we have guidelines about what content we will recommend to people. Those guidelines fit into a strategy we have used to manage problematic content on Facebook since 2016, called “remove, reduce, and inform.” This strategy involves removing content that violates our Community Standards, reducing the spread of problematic content that does not violate our standards, and informing people with additional information so they can choose what to click, read, or share. Discussion of our “reduce” work on Facebook has often centered on News Feed and how we rank posts within it. However, our Recommendations Guidelines are another important tool that we use to reduce the spread of problematic content on our platform.
Through our Recommendations Guidelines, we work to avoid making recommendations that could be low-quality, objectionable, or particularly sensitive, and we also avoid making recommendations that may be inappropriate for younger viewers. Our Recommendations Guidelines are designed to maintain a higher standard than our Community Standards, because recommended content and connections are from accounts or entities you haven't chosen to follow. Therefore, not all content allowed on our platform will be eligible for recommendation.
In developing these guidelines, we sought input from 50 leading experts specializing in recommender systems, expression, safety, and digital rights. Those consultations are part of our ongoing efforts to improve these guidelines and provide people with a safe and positive experience when they receive recommendations on our platform.
We want to provide you with more information about the types of content, accounts, and entities that we try to avoid recommending, both to keep our community more informed and to provide guidance for content creators about recommendations.
Content Recommendations
There are five categories of content that are allowed on our platforms, but that may not be eligible for recommendations. These categories are listed below, as are some illustrative examples of content within each category.
Content that Impedes Our Ability to Foster a Safe Community, such as:
  1. Content that discusses self-harm, suicide, or eating disorders. (We remove content that encourages suicide or self-injury, as well as any graphic imagery of it.)
  2. Content that may depict violence, such as people fighting. (We remove graphically violent content.)
  3. Content that may be sexually explicit or suggestive, such as pictures of people in see-through clothing. (We remove content that contains adult nudity or sexual activity.)
  4. Content that promotes the use of certain regulated products, such as tobacco or vaping products, adult products and services, or pharmaceutical drugs. (We remove content that attempts to sell or trade most regulated goods.)
  5. Content shared by any non-recommendable account or entity (e.g. Groups or Pages, as outlined below).
Sensitive or Low-Quality Content about Health or Finance, such as:
  1. Content that promotes or depicts cosmetic procedures.
  2. Content containing exaggerated health claims, such as “miracle cures.”
  3. Content attempting to sell products or services based on health-related claims, such as promoting a supplement to help a person lose weight.
  4. Content that promotes misleading or deceptive business models, such as payday loans or “risk-free” investments.
Content that Users Broadly Tell Us They Dislike, such as:
  1. Content that includes clickbait.
  2. Content that includes engagement bait.
  3. Content that promotes a contest or giveaway.
  4. Content that includes links to low-quality or deceptive landing pages or domains, such as landing pages filled with click-through or malicious ads.
Content that is Associated with Low-Quality Publishing, such as:
  1. Unoriginal content that is largely repurposed from another source without adding material value.
  2. Content from websites that get a disproportionate number of clicks from Facebook versus other places on the web.
  3. News content that does not include transparent information about authorship or the publisher’s editorial staff.
False or Misleading Content, such as:
  1. Content including claims that have been found false by independent fact-checkers. (We remove misinformation that could cause physical harm or suppress voting.)
  2. Vaccine-related misinformation that has been widely debunked by leading global health organizations.
  3. Content that promotes the use of fraudulent documents, such as someone sharing a post about using a fake ID. (We remove content attempting to sell fraudulent documents, like medical prescriptions.)
As noted above, we take additional steps to avoid recommending certain types of sensitive content to minors on Facebook. For example, we strive to build our systems so that they do not recommend content promoting the sale or use of alcohol to users who are minors.
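As a rough illustration of how the five categories above might translate into an eligibility check, consider the following Python sketch. The tags, the age threshold, and the structure are all assumptions made for illustration; they are not Facebook's actual implementation of the Recommendations Guidelines.

```python
# Illustrative only: hypothetical tags and threshold, not Facebook's
# actual Recommendations Guidelines implementation.
NON_RECOMMENDABLE_TAGS = {
    "safety_sensitive",            # self-harm discussion, depicted violence, ...
    "health_finance_low_quality",  # miracle cures, payday loans, ...
    "broadly_disliked",            # clickbait, engagement bait, giveaways, ...
    "low_quality_publishing",      # unoriginal or non-transparent news content
    "false_or_misleading",         # fact-checked or widely debunked claims
}
MINOR_RESTRICTED_TAGS = {"alcohol_promotion"}  # extra rules for minors

def eligible_for_recommendation(post_tags, viewer_age):
    """Content matching any category stays on the platform but is
    excluded from recommendation surfaces; some content is also
    withheld from minors specifically."""
    if post_tags & NON_RECOMMENDABLE_TAGS:
        return False
    if viewer_age < 18 and post_tags & MINOR_RESTRICTED_TAGS:
        return False
    return True

print(eligible_for_recommendation({"alcohol_promotion"}, viewer_age=17))  # False
print(eligible_for_recommendation({"alcohol_promotion"}, viewer_age=30))  # True
```

The key point the guidelines make is that ineligibility for recommendation is separate from removal: filtered content can still be found by people who seek it out or already follow its source.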
Account and Entity Recommendations
We also try not to recommend accounts (including Profiles and Page admins) or entities (such as Pages, Groups, or Events) that:
  1. Recently violated Facebook’s Community Standards. This does not include accounts or entities that we remove from our platforms entirely for violating Facebook’s Community Standards.
  2. Repeatedly and/or recently shared content (including the names or cover photos associated with groups or Pages) we try not to recommend across the categories described in the Content Recommendations section above.
  3. Repeatedly posted vaccine-related misinformation that has been widely debunked by leading global health organizations.
  4. Repeatedly engaged in misleading practices to build followings, such as purchasing “likes.”
  5. Have been banned from running ads on our platforms.
  6. Recently and repeatedly posted false information as determined by independent third-party fact-checkers or certain expert organizations.
  7. Are associated with offline movements or organizations that are tied to violence.
We may let people know when they're about to engage with an entity that meets any of the above criteria to help them make informed decisions.
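For illustration only, the criteria above could be modeled as a simple eligibility check over an entity’s recent history. The field names and thresholds in this Python sketch are invented and do not reflect Facebook’s actual systems.

```python
from dataclasses import dataclass

# Hypothetical entity record; field names and thresholds are invented
# for illustration and do not reflect Facebook's actual systems.
@dataclass
class Entity:
    recent_standards_violation: bool = False
    non_recommendable_posts: int = 0  # recent posts matching the content categories
    debunked_vaccine_posts: int = 0   # widely debunked vaccine misinformation
    bought_engagement: bool = False   # e.g., purchased "likes"
    banned_from_ads: bool = False
    fact_check_strikes: int = 0       # recent third-party fact-check failures
    violence_linked: bool = False     # tied to violent offline movements

def recommendable(e: Entity, repeat_threshold: int = 3) -> bool:
    """Any matching criterion removes the entity from recommendation
    surfaces (it may still remain on the platform)."""
    return not (
        e.recent_standards_violation
        or e.non_recommendable_posts >= repeat_threshold
        or e.debunked_vaccine_posts >= repeat_threshold
        or e.bought_engagement
        or e.banned_from_ads
        or e.fact_check_strikes >= repeat_threshold
        or e.violence_linked
    )

print(recommendable(Entity()))                      # True
print(recommendable(Entity(banned_from_ads=True)))  # False
```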
A similar set of these guidelines applies to recommendations on Instagram. Those guidelines can be found in the Instagram Help Center.