t, landing, tag, category) I run a site: search on Google with the domain name and the keyword (for example, site:example.com "keyword") to check whether, and how, I have already covered the topic. Above all, I do a search on the aforementioned Google Trends and, before creating a post on a topic, I evaluate whether it makes more sense to update what is already online. This way I avoid duplicates.
3 Ways to Fix Duplicate Content for SEO
Ok, the damage is done: you have identified a series of duplicate contents. With Search Console, for example, you just study the queries and find out whether several URLs are competing for the same keyword. SeoZoom, too, in its URL analysis, flags any inconvenient presences. In any case, it is important to make decisions by evaluating the context.
Editing content
The first method to deal with duplicate content: work on a rewrite and, if necessary, on a URL change with a 301 redirect, to change direction and help search engines understand that there was an error and that the two pages now deal with different topics.
Then, in some cases you can focus on merging two pages: you identify the one with fewer resources (links, traffic), copy any useful content onto the winning page, delete the weaker one and set up a nice 301 redirect to the page you decide to keep.
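To make that last step concrete, here is a minimal sketch of the redirect. It assumes an Apache server where you can edit the .htaccess file, and the URLs are purely hypothetical placeholders:

```apache
# .htaccess — hypothetical URLs, assumes Apache with mod_alias enabled
# Sends visitors and crawlers from the deleted duplicate to the page we keep,
# passing along the "change of address" signal with a permanent (301) redirect
Redirect 301 /old-duplicate-post/ https://www.example.com/winning-post/
```

If your site runs on WordPress or another CMS, a redirect plugin or your host's server configuration can achieve the same result.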
Delete unnecessary pages
In some cases, the best way to solve the duplicate content problem is to delete unnecessary pages. Some examples? You have two or more tags that are semantically identical but spelled differently (for example, "copywriter" and "copy writer"), or archives that are completely similar.
The latter occurs when the author page overlaps with the main archive (a typical situation for a single-author blog). One more condition: you have dozens of articles on the same topic published within a short space of time, as happens for example with magazines when a weather alert is anticipated.
What to do in these cases? Having verified the absence of traffic, rankings and backlinks, we delete what is not needed. Let's face it: we can safely work in this direction for media attachment pages and date-based archives too, unless there are specific needs.
Give directions to Google
In some cases the problem is not UX but SEO. In fact, there are many circumstances in which pages should not be modified or deleted. What to do in these cases?
Canonical: indicates, on a web page, which version Google should take into consideration. If, for example, I have two similar pages and I want Google to consider only one of them for indexing, I put the URL of that preferred page in the canonical tag of the resource to be set aside, so Google does not treat it as a duplicate. A bit like telling Google: "Ok, it is not the valuable content, but we need it".
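Here is what that looks like in practice, as a minimal sketch with hypothetical URLs: the tag goes in the <head> of the page you want to set aside and points to the version you want indexed.

```html
<!-- Hypothetical example: placed in the <head> of the secondary (duplicate) page -->
<!-- It tells Google which version of the content should be considered for indexing -->
<link rel="canonical" href="https://www.example.com/preferred-page/" />
```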
Noindex: the go-to directive for any situation where you don't want Google to index a resource, regardless of whether a related page exists or not. For example, if you have content that also appears on other websites and you have permission to distribute it, you can use the noindex meta tag to publish it without repercussions.
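In its simplest form, again as a hypothetical sketch, the directive is a meta robots tag in the <head> of the page you want to keep out of the index:

```html
<!-- Hypothetical example: meta robots tag in the <head> of the page to exclude -->
<!-- "noindex" keeps the page out of Google's index; "follow" still lets crawlers follow its links -->
<meta name="robots" content="noindex, follow" />
```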
In certain situations you can act with the axe and use robots.txt to close certain sections or directories to Google's gaze. Despite everything, these are always indications that we give to the search engine, not absolute constraints.
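For instance, a hypothetical robots.txt rule that closes a directory to all crawlers could look like this (the directory name is a placeholder):

```
# Hypothetical robots.txt rule — discourages crawling of an archive directory for every user agent
# Remember: it is an indication, not a guarantee that the URLs stay out of the index
User-agent: *
Disallow: /date-archive/
```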
The only way to prevent Google, with 100% certainty, from seeing duplicate content that you don't want to delete is a password-protected section. That can be an idea for news archives or for documents that live elsewhere and that you want only certain people to be able to download.
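On an Apache server, for example, a password-protected directory can be set up with HTTP basic authentication. This is a minimal sketch, and the credentials file path is a placeholder to adapt to your own setup:

```apache
# Hypothetical .htaccess for the directory to protect — assumes Apache with mod_auth_basic
AuthType Basic
AuthName "Private archive"
AuthUserFile /path/to/.htpasswd
Require valid-user
```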
What to do before proceeding?
Careful evaluation, that's what these operations need: analysis, because the risk of penalizing or, even worse, deleting something that actually plays a role on your website is high. This is why often, even at the level of e