Google pushes for an official web crawler standard

July 2, 2019

From Engadget: One of the cornerstones of Google's business (and of the web at large) is the robots.txt file, which sites use to exclude some of their content from the search engine's web crawler, Googlebot. It minimizes pointless indexing and sometimes keeps sensitive information under wraps. Google thinks its crawler technology can improve, though, so it is shedding some of its secrecy: the company is open-sourcing the parser it uses to decode robots.txt, in a bid to foster a true standard for web crawling. Ideally, this takes much of the mystery out of how to interpret robots.txt files and will lead to a more common format. To learn more, read the full article at Engadget.
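As a quick sketch of what these rules look like in practice, Python's standard-library `urllib.robotparser` can parse a robots.txt file and answer allow/deny questions the same way a crawler would. The rules and URLs below are illustrative, not taken from any real site:

```python
from urllib.robotparser import RobotFileParser

# A minimal, hypothetical robots.txt: block Googlebot from /private/,
# allow every other crawler everywhere.
robots_txt = """\
User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot is excluded from /private/ but may fetch public pages.
print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/public/page.html"))   # True
```

Open-sourcing Google's own parser matters because edge cases (mixed case, wildcards, conflicting rules) have historically been handled differently by different crawlers; a shared reference implementation makes answers like the ones above consistent across the web.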




© 2020 Safi Bello A Girls How To Guide