How to avoid loading robots.txt

Before each crawl, YaCy loads the robots.txt even though I always uncheck “respect the…”, as I am only interested in the front page.

  1. Is there a way to stop this behavior?
  2. Is there a way to increase the number of crawling slots? It’s pretty hard to convince YaCy to use my hardware properly.

After solving my bind bottleneck I expected more performance, but I rarely see > 3000 PPM or > 1 MBit/s download (on a 1 GBit/s line). Each domain is expected to be crawled only once (level 0).
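For comparison, here is a minimal standalone Java sketch (the domain list is made up) that fetches only the front page of each domain concurrently with `java.net.http`; it can help verify that the raw line throughput, rather than the crawler, is the bottleneck:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class FrontPageFetch {
    public static void main(String[] args) {
        // Hypothetical targets; replace with your own domain list.
        List<URI> targets = List.of(
                URI.create("https://example.com/"),
                URI.create("https://example.org/"));

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .followRedirects(HttpClient.Redirect.NORMAL)
                .build();

        // Fire all front-page requests concurrently (level 0: one URL per domain).
        List<CompletableFuture<Void>> jobs = targets.stream()
                .map(u -> client.sendAsync(
                                HttpRequest.newBuilder(u).GET().build(),
                                HttpResponse.BodyHandlers.ofString())
                        .thenAccept(r -> System.out.println(
                                u + " -> " + r.statusCode()
                                        + " (" + r.body().length() + " chars)")))
                .toList();

        jobs.forEach(CompletableFuture::join);
    }
}
```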

I am considering implementing an option for this.
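In case it helps the discussion, here is a rough sketch of what such an option could look like: a config flag that short-circuits the robots.txt fetch before it happens. The names (`RobotsGate`, `ignoreRobots`, `crawler.ignoreRobots`) are my own invention, not YaCy’s actual internals:

```java
/**
 * Hypothetical sketch only: the flag name and class structure are
 * assumptions, not YaCy's real API.
 */
public class RobotsGate {

    // Would be read from yacy.conf, e.g. crawler.ignoreRobots = true
    private final boolean ignoreRobots;

    public RobotsGate(boolean ignoreRobots) {
        this.ignoreRobots = ignoreRobots;
    }

    /** Decides whether a URL may be fetched without consulting robots.txt. */
    public boolean isAllowed(java.net.URI url) {
        if (ignoreRobots) {
            return true; // skip the robots.txt download entirely
        }
        return fetchAndCheckRobots(url); // the existing lookup path
    }

    private boolean fetchAndCheckRobots(java.net.URI url) {
        // Placeholder for the real robots.txt retrieval and rule check.
        return true;
    }
}
```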