Before a crawler starts crawling your website, it will request a file known as ‘robots.txt’. This file tells web bots which pages they have permission to crawl and which they should leave alone. It’s generally used to avoid overloading your site with crawl requests and, just as importantly, to keep crawlers away from files or pages you don’t want them fetching, such as password-protected pages or pages with duplicate content. (Bear in mind that blocking a page in robots.txt doesn’t guarantee it stays out of Google’s index; it only stops crawlers from fetching it.)
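To give you a picture of what that looks like, here’s a minimal robots.txt sketch; the blocked path is just a placeholder for whatever section of your own site you want crawlers to skip:

```
# Applies to every crawler
User-agent: *
# Keep bots out of a section you don't want crawled (placeholder path)
Disallow: /private/
# Everything else stays crawlable
Allow: /
```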
Here’s more info on robots.txt and how to create one if you’d like to dig deeper.
Another quick way to help your pages get indexed is to check or create your sitemap. A sitemap has many functions, but it primarily exists so crawlers can quickly identify and index the important pages on your website.
There are two types of sitemaps: HTML and XML. HTML sitemaps are written for people and are viewable on your site to help users navigate your pages. For indexation, though, the XML sitemap is the one that matters: it’s designed specifically for crawlers, which can quickly pull the key information about your site from it.
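For a sense of the format, a bare-bones XML sitemap looks roughly like this; the URLs and dates below are placeholders for your own pages:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per important page on your site -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/services/</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
</urlset>
```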
After you’ve evaluated or created your sitemap, submit it to the major search engines to get them crawling.
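Submitting through each engine’s webmaster tools (such as Google Search Console) is the most direct route, but you can also point crawlers at your sitemap from robots.txt with a single line; swap in your own sitemap URL:

```
Sitemap: https://www.example.com/sitemap.xml
```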
Here’s the Google Developers support article on how to build and submit your sitemap.