Quick & Easy Technical SEO Tips

Learn quick and easy technical SEO tips to fine-tune your site for better visibility, speed and more. Read now.

Written by

Roy Zhai

Published

26 October 2022

Categories

Acquisition

SEM

Do you have a nagging feeling there’s more you can do to your site to improve its performance, visibility, engagement rates, etc.? If so, you’re in luck!

In this article, we’ll walk you through auditing your website from a technical SEO perspective so you can identify those issues and learn how to resolve them.

Each fix should take you an hour or less to complete. Maybe try one a week, or if you’ve got some downtime on the horizon, aim for one a day.

We’ll cover everything from crawling and indexing to site speed and security. So, whether you’re brand new to technical SEO audits or just looking for a refresher, read on for tips and tricks that will level up your skills and supercharge your site no matter how short on time you are.

What you’ll need

To identify key issues in a technical SEO audit, you need the right toolset, and thankfully, there are plenty to choose from.

Here’s a list of the most user-friendly and effective tools available to give you a head start:

  • Screaming Frog – A powerful crawler tool that identifies common onsite SEO issues with many advanced features.
  • SEMrush – Another crawler that analyses the health of a website and lists key areas where the website is struggling.
  • JetOctopus – Great for enterprise-level websites (more than 100,000 URLs); it provides a comprehensive technical audit.
  • Ahrefs – Our favoured tool for analysing backlinks & anchor text with a built-in site audit function to quickly ID any tech issues.
  • PageSpeed Insights – A free Google developer tool that provides insights into page-level issues slowing your load times, and more. Screaming Frog has an API connector that can pull this information in for every URL crawled. Extremely useful when conducting large-scale audits.

Look at each of these, decide which one suits your purposes, and your first task is done.


Crawlers, indexing & sitemaps

1. Crawlers

Crawlers (commonly known as spiders or web spiders), such as Googlebot, crawl websites to discover pages and sort them within a search engine. These bots are responsible for indexing publicly available pages so content can be organised for users based on their specific search queries. Search engines also allocate a certain budget to each crawl.

The crawl budget refers to the number of pages a bot can crawl and index on a website within a specific timeframe, which is why factors such as page speed and internal links, discussed below, matter so much.

It’s important to get indexation right. To use a metaphor, a website with non-indexed pages is like a shop in the desert. It doesn’t matter what the shop sells; if no one knows about it – if no one can find it or get to it – it won’t earn you any business.

2. Are your most important pages indexed?

An index in SEO refers to the database a search engine uses to store information: it contains data on all the pages that crawlers can find.

If a website or a specific page is not listed in a search engine’s index, users won’t be able to find it. That’s why it’s important to review your pages and ensure they are indexable and easy for bots to find.

Here are some things you can do to help that process along:

2.1 Check no-index tags

Use Screaming Frog for a quick way to see if any meta tags are blocking your pages from being indexed, or use SEO Optimiser’s free tool. When your crawl has finished, look for the ‘noindex’ tag and check whether the affected pages have ranking potential. If they do, this needs a fix.

You’ll need to remove the ‘noindex’ tag from your page’s HTML code to remedy this issue. You may need access to the frontend HTML code and might require assistance from a developer. If you’re using a CMS, there is most likely an option enabled that prevents indexing of the page, which you can turn off. If you’re using an SEO plugin like Yoast or RankMath, you can make those updates in the plugin.
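If you’d rather spot-check a handful of URLs before running a full crawl, a few lines of Python can flag pages carrying a noindex directive. Here’s a minimal sketch, assuming the requests and beautifulsoup4 packages are installed and that the URL shown is a placeholder:

import requests
from bs4 import BeautifulSoup

def has_noindex(url):
    """Return True if the page carries a robots noindex directive."""
    response = requests.get(url, timeout=10)

    # A noindex can arrive via the X-Robots-Tag HTTP header...
    if "noindex" in response.headers.get("X-Robots-Tag", "").lower():
        return True

    # ...or via a <meta name="robots"> tag in the page's HTML.
    soup = BeautifulSoup(response.text, "html.parser")
    robots_meta = soup.find("meta", attrs={"name": "robots"})
    return bool(robots_meta and "noindex" in robots_meta.get("content", "").lower())

print(has_noindex("https://www.example.com/some-page"))  # placeholder URL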

2.2 Internal & external linking

As mentioned earlier, getting your pages indexed is crucial, and helping crawlers along through internal linking will do exactly that.

Googlebot uses internal linking to identify important pages, prioritising those that have many internal and external links pointing to them. Internal links also lead Googlebot on to the other pages they point to. Auditing your internal and external links through your preferred crawler is important, as broken links reduce your website's crawlability and hurt user experience (a small link-checker sketch follows the checklist below).

Ask yourself these questions when you’re looking at internal and external linking:

  • Are any links pointing to 404 errors?
  • Am I linking to the most important pages?
  • How many clicks does it take to reach important pages?
  • Is the anchor text for my links consistent with the page’s content?
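The 404 question in particular is easy to check programmatically. Here’s a minimal sketch of a link checker, assuming requests and beautifulsoup4 are installed and the start URL is a placeholder; a dedicated crawler will still give you a more complete picture:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def find_broken_internal_links(page_url):
    """List internal links on a single page that don't resolve cleanly."""
    soup = BeautifulSoup(requests.get(page_url, timeout=10).text, "html.parser")
    site = urlparse(page_url).netloc
    broken, checked = [], set()

    for anchor in soup.find_all("a", href=True):
        link = urljoin(page_url, anchor["href"]).split("#")[0]
        if urlparse(link).netloc != site or link in checked:
            continue  # skip external links and links already tested
        checked.add(link)
        status = requests.head(link, allow_redirects=True, timeout=10).status_code
        if status >= 400:
            broken.append((link, status))

    return broken

for link, status in find_broken_internal_links("https://www.example.com/"):
    print(status, link)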

2.3 Robots.txt

Before a crawler crawls your website, it will request a file known as ‘robots.txt’. This file tells web bots which pages they have permission to crawl and what content they can index. It’s generally used to avoid overloading your site with crawl-request bot traffic but, most importantly, to keep a file or page off Google, like password-protected pages or those with similar content.

Here’s more info about robots.txt and how to create one if you want to know more.
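To test what your robots.txt actually allows, Python’s standard library ships a parser. A minimal sketch, using a placeholder domain and Googlebot as the user agent of interest:

from urllib.robotparser import RobotFileParser

# Load and parse the live robots.txt file (placeholder domain).
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

# Check whether the chosen crawler may fetch specific URLs.
for path in ("https://www.example.com/", "https://www.example.com/account/login"):
    allowed = parser.can_fetch("Googlebot", path)
    print("ALLOWED" if allowed else "BLOCKED", path)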

2.4 Sitemaps

Another time-effective method to ensure your pages get indexed is to check or create your sitemap. A sitemap has many functions but primarily exists for crawlers to quickly identify and index important pages on your website.

There are two types of sitemap: HTML and XML. HTML sitemaps are written for people and are viewable on your site to help users navigate your pages. In terms of indexation, though, the XML sitemap is the important one: it’s designed specifically for crawlers, which can quickly pull the key information about your site from it.

After you’ve evaluated or created your sitemap, please submit it to the major search engines to get them crawling.

Here’s the Google Developers support article on how to build and submit your sitemap.
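Once your XML sitemap is live, it’s worth confirming that it lists the URLs you expect. Here’s a minimal sketch that fetches a sitemap and prints its entries, assuming requests is installed and the sitemap location is a placeholder:

import requests
import xml.etree.ElementTree as ET

# Standard namespace used by the sitemap protocol's <urlset> format.
SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def list_sitemap_urls(sitemap_url):
    """Fetch an XML sitemap and return every <loc> entry it contains."""
    root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
    return [loc.text for loc in root.findall(".//sm:loc", SITEMAP_NS)]

urls = list_sitemap_urls("https://www.example.com/sitemap.xml")
print(len(urls), "URLs listed in the sitemap")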

Identify major errors – 4xx & 5xx errors, redirect loops, metadata

4xx & 5xx errors

After you’ve finished reviewing indexation, the next thing to do is to check for any major crawl errors on site.

Crawl errors happen when a search engine fails to reach a page on your website. These are generally known as 4xx (client) or 5xx (server) errors.

Screaming Frog, JetOctopus, DeepCrawl and Semrush can help you identify these issues.

When you’ve finished crawling your site:

  • export the data and review the status codes
  • look for 400 and 500 errors
  • determine where you want to direct these pages to
  • implement 301 redirects to make it happen.

Not sure how? Here’s a Semrush article that will help.
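If you’d like to re-check an exported URL list yourself before deciding where to point your redirects, a short script will do it. A minimal sketch, assuming requests is installed and urls.txt is a hypothetical file with one URL per line:

import requests

with open("urls.txt") as handle:  # hypothetical export: one URL per line
    urls = [line.strip() for line in handle if line.strip()]

for url in urls:
    try:
        status = requests.head(url, allow_redirects=False, timeout=10).status_code
    except requests.RequestException as error:
        print("ERROR", url, error)
        continue

    # Flag client (4xx) and server (5xx) errors for fixing or redirecting.
    if status >= 400:
        print(status, url)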

Identify and fix redirect loops

Simply put, a redirect loop occurs when a URL has multiple redirects that send the original URL to another URL, which eventually redirects back to the URL first requested. Redirect loops are often the result of poor redirect configuration and leave users and crawlers stuck in a loop on your website. From a user experience perspective, an error is displayed because the user never reaches the requested page.

As for SEO, redirects pass on ranking signals like authority and relevance, also known as ‘link juice’. The final URL never resolves in a redirect loop, meaning these ranking signals are lost.

Screaming Frog (under the Reports tab) lets you export all your on-site redirects and identifies any URLs with chains or loops that need fixing.
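You can also trace a suspect URL hop by hop yourself. Here’s a minimal sketch, assuming requests is installed and the starting URL is a placeholder; anything longer than a single hop is worth tidying up:

import requests
from urllib.parse import urljoin

def trace_redirects(url, max_hops=10):
    """Follow redirects one hop at a time, flagging chains and loops."""
    hops, seen = [url], {url}

    while len(hops) <= max_hops:
        response = requests.head(hops[-1], allow_redirects=False, timeout=10)
        if response.status_code not in (301, 302, 303, 307, 308):
            break  # resolved: no further redirect
        next_url = urljoin(hops[-1], response.headers.get("Location", ""))
        hops.append(next_url)
        if next_url in seen:
            print("Redirect loop detected!")
            break
        seen.add(next_url)

    return hops

print(" -> ".join(trace_redirects("http://example.com/old-page")))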

Metadata – Page titles, meta descriptions & heading tags

Next up? Let’s audit all your pages’ metadata, including page titles, meta descriptions and H1 tags, to ensure they’re optimised and relevant to each page’s content.

1. Page titles

Page titles should be between 40 and 60 characters and include the keyword you’re targeting. This helps users and crawlers get a quick gauge of what your content is about, which can affect how well your page ranks in organic searches for specific queries.

Backlinko recently analysed CTR data across 1,312,881 pages and discovered that titles within the 40–60 character range have an 8.9% better average click-through rate than those outside it.

2. Meta descriptions

Meta descriptions are snippets that show up under your organic listing. They should entice searchers who are on the lookout for content like yours and act as a kind of mini ad. A well-composed meta description won’t have an impact on your ranking, but it’ll significantly boost your click-through rate when a user discovers your listing.

Your meta description should be no longer than 155 characters and no more than 920 pixels. Use this tool to optimise yours.
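To spot-check the title and meta description lengths discussed above on a single page, you can pull them straight from the HTML. A minimal sketch, assuming requests and beautifulsoup4 are installed and the URL is a placeholder (character counts are a rough proxy; pixel width depends on the actual glyphs used):

import requests
from bs4 import BeautifulSoup

def check_metadata(url):
    """Print the page title and meta description with simple length warnings."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    title = soup.title.get_text(strip=True) if soup.title else ""
    description_tag = soup.find("meta", attrs={"name": "description"})
    description = description_tag.get("content", "") if description_tag else ""

    print(f"Title ({len(title)} chars): {title}")
    if not 40 <= len(title) <= 60:
        print("  -> outside the 40-60 character sweet spot")

    print(f"Description ({len(description)} chars): {description}")
    if not 0 < len(description) <= 155:
        print("  -> missing or longer than 155 characters")

check_metadata("https://www.example.com/")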

3. Heading tags

Heading tags give structure to your page’s hierarchy. They’re used to separate headings and subheadings on your page.

• The H1 header introduces the topic of the page. It’s the most important signal to crawlers, as they use these tags to identify content and understand the page. H1s also improve accessibility for people who use screen readers.

• The H2 headers break down the content into sections, describing the main topics that will be covered in each section of the article.

• Subsequent headers (H3-H6) act as additional subheadings within each section, as needed.

What makes a good article? One that’s scannable, easy to read and offers a clear structure, and that’s what you achieve when you use header tags effectively.

Scannable articles are more likely to perform well in the search engines: scannable content gets read more, and it’s more likely to get shared as well.
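To see whether a page’s heading hierarchy actually follows that structure, you can list its heading tags in document order. A minimal sketch, again assuming requests and beautifulsoup4, with a placeholder URL:

import requests
from bs4 import BeautifulSoup

def audit_headings(url):
    """Print every heading tag in document order and flag missing or duplicate H1s."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    headings = soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])

    for tag in headings:
        indent = "  " * (int(tag.name[1]) - 1)  # indent by heading level
        print(f"{indent}{tag.name.upper()}: {tag.get_text(strip=True)}")

    h1_count = sum(1 for tag in headings if tag.name == "h1")
    if h1_count != 1:
        print(f"Warning: found {h1_count} H1 tags; aim for exactly one per page.")

audit_headings("https://www.example.com/")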


De-Duplicate

If you don’t set up the HTTP and HTTPS versions of your website correctly, both versions can get indexed by search engines, which will cause duplicate content issues (as can www and non-www versions). To avoid these kinds of problems, prioritise one version (always HTTPS if you’re taking payments or asking for sensitive information).

Here’s an article by Semrush with a link to their SEO auditing tool, which will identify HTTP vs HTTPS issues.

Got a problem? One way to establish the primary HTTP or HTTPS version of your pages is to add canonical tags to the duplicate pages. These tags tell search engines which URL holds the main version of your content. Find out more about that here via Ahrefs’ Canonical Tags: Simple Guide for Beginners.

You can also set up a 301 redirect (you’ll remember those from earlier on in the article) from your HTTP pages to your HTTPS pages. This way, you can make sure that users only access secure pages on your website.
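A quick way to confirm both fixes are in place is to request the HTTP version of a page, check where it ends up, and then look at the canonical tag on the final page. A minimal sketch with a placeholder domain, assuming requests and beautifulsoup4 are installed:

import requests
from bs4 import BeautifulSoup

def check_https_setup(http_url):
    """Check that the HTTP URL redirects to HTTPS and that the canonical tag agrees."""
    response = requests.get(http_url, timeout=10)

    if response.history and response.url.startswith("https://"):
        print(f"OK: {http_url} redirects to {response.url} "
              f"(first hop status: {response.history[0].status_code})")
    else:
        print(f"Warning: {http_url} did not end up on an HTTPS URL.")

    # Find the canonical tag on the final page, if one exists.
    canonical_href = None
    for link in BeautifulSoup(response.text, "html.parser").find_all("link"):
        if "canonical" in (link.get("rel") or []):
            canonical_href = link.get("href")
            break
    print(f"Canonical tag points to: {canonical_href or 'none found'}")

check_https_setup("http://example.com/")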

Core web vitals

Google considers user experience a ranking factor, so reviewing your website’s core web vitals is important.

Core web vitals refer to the performance of your page, broken down into three metrics:

  • Largest Contentful Paint (LCP): The LCP metric measures your website’s loading performance. To provide a good user experience, LCP should resolve within 2.5 seconds of when the page first starts loading.
  • First Input Delay (FID): FID measures your website’s interactivity, i.e., how quickly the page responds to a user’s first interaction. Pages should have an FID of 100 milliseconds or less.
  • Cumulative Layout Shift (CLS): The CLS metric measures the visual stability of your pages, i.e., how much assets shift around the page as it loads. Pages should maintain a CLS of 0.1 or less.

There are a few tools that can help identify issues affecting core web vitals: Lighthouse (a developer tool built into Google Chrome) or the PageSpeed Insights tool.
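If you want the raw numbers without opening a browser, PageSpeed Insights also exposes a public API you can query. A minimal sketch, assuming requests is installed, YOUR_API_KEY stands in for a hypothetical key from Google Cloud, and noting that the exact response fields can change between Lighthouse versions, so check the API documentation:

import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def fetch_lab_metrics(url, api_key):
    """Query the PageSpeed Insights API and print headline Lighthouse lab metrics."""
    params = {"url": url, "key": api_key, "strategy": "mobile"}
    report = requests.get(PSI_ENDPOINT, params=params, timeout=60).json()
    audits = report.get("lighthouseResult", {}).get("audits", {})

    # Audit IDs as reported by Lighthouse; names may differ between versions.
    for audit_id in ("largest-contentful-paint", "cumulative-layout-shift", "total-blocking-time"):
        audit = audits.get(audit_id, {})
        print(f"{audit.get('title', audit_id)}: {audit.get('displayValue', 'n/a')}")

fetch_lab_metrics("https://www.example.com/", "YOUR_API_KEY")  # hypothetical API key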

Screaming Frog has an API connector that integrates PageSpeed Insights into its crawl, making it simple to identify common core web vitals issues across your pages.

These reports will make recommendations where there is room for improvement across these metrics. Some fixes are large, but many are small.

Get your report and prepare a plan to tackle the issues that are preventing your site from passing the Core Web Vitals assessment to boost your SEO rank (and give your site visitors a better user experience in the process).
