Indexing Robot (Crawler): Identification and Policies
How our crawler works, which sources it collects, how it respects robots.txt, and how you can contact us.
Identification
User-Agent: VikiTronBot/1.0 (+https://vikitron.com/bot)
Contact:
- abuse@bizkithub.com (for DMCA/abuse)
- support@bizkithub.com (technical questions)
IP Ranges: Will be published here; we recommend whitelisting at the CIDR level.
Crawling Policies
We respect robots.txt, noindex and nofollow directives.
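For reference, the noindex and nofollow directives typically appear either as an HTML meta tag or as an HTTP response header, in these standard forms:

    <meta name="robots" content="noindex, nofollow">
    X-Robots-Tag: noindex, nofollow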
Crawl Politeness:
- Adaptive speed, default 1–2 req/s per host
- Considers response time and error rates
- Exponential backoff, max 3 retry attempts (see the sketch below)
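A minimal Python sketch of the retry behaviour described above; the function name, timeout and delay values are illustrative assumptions, not our internal code:

    import random
    import time
    import urllib.request
    from urllib.error import HTTPError, URLError

    USER_AGENT = "VikiTronBot/1.0 (+https://vikitron.com/bot)"

    def polite_fetch(url, max_retries=3, base_delay=1.0):
        # Fetch a URL, retrying up to max_retries times with exponential backoff.
        for attempt in range(max_retries):
            try:
                req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
                with urllib.request.urlopen(req, timeout=10) as resp:
                    return resp.read()
            except (HTTPError, URLError):
                if attempt == max_retries - 1:
                    raise
                # Back off exponentially (1 s, 2 s, ...) plus jitter before retrying.
                time.sleep(base_delay * (2 ** attempt) + random.random())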
Freshness:
- Key pages checked more frequently
- Multi-tier schedule based on change frequency (illustrated below)
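As an illustration of a multi-tier schedule, the tier names and intervals below are hypothetical, not our published schedule:

    from datetime import timedelta

    # Hypothetical tiers: pages that change often are revisited sooner.
    REVISIT_INTERVALS = {
        "frequent": timedelta(hours=6),
        "standard": timedelta(days=3),
        "archive": timedelta(days=30),
    }

    def next_visit(last_visit, tier):
        # Earliest time a page in the given tier becomes due for a re-crawl.
        return last_visit + REVISIT_INTERVALS[tier]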
Collected Data
We collect:
- DNS records (public)
- robots.txt files
- Certificates from the TLS handshake
- Public HTTP metadata (headers, status codes)
- Certificate Transparency log excerpts
- Reverse DNS data
We do NOT store:
- Content behind authentication
- Personal data from forms
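To illustrate that the certificate data listed above is already public: the certificate presented during the TLS handshake can be retrieved by any client, for example with Python's standard library (the hostname is illustrative):

    import ssl

    # Retrieve the PEM-encoded certificate the server presents to every TLS client.
    pem_cert = ssl.get_server_certificate(("example.com", 443))
    print(pem_cert)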
Opt-out / Restrictions
Block the user-agent in robots.txt:

    User-agent: VikiTronBot
    Disallow: /
Reduce crawling speed: use the Crawl-delay directive.
For sensitive directories: prefer a combination of Disallow and server-side rules (401/403). A combined example follows below.
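Example robots.txt combining both options (the delay value, conventionally read as seconds between requests, and the directory path are illustrative):

    User-agent: VikiTronBot
    Crawl-delay: 10
    Disallow: /private/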
Test Your Robots.txt
Want to see how VikiTronBot interprets your robots.txt file? Use our free tool to validate syntax and test specific URLs against your rules.
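For a quick local check, you can also run your robots.txt through a standard parser; the sketch below uses Python's urllib.robotparser (URLs are illustrative, and our hosted tool may handle edge cases differently):

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetches and parses the live robots.txt
    # True if the rules allow VikiTronBot to fetch this URL
    print(rp.can_fetch("VikiTronBot", "https://example.com/private/page.html"))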
Check Your Robots.txt →
Bot Verification
We send From: and User-Agent headers as per the identification above.
On request, we can fetch a verification URL/token such as: https://example.com/.well-known/vikitron-verify.txt
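If you filter requests on your side, a minimal first-pass check is to match the User-Agent string from the Identification section; note that this header can be spoofed, so combine it with the published IP ranges (CIDR allowlist) once available. The helper below is a hypothetical sketch, not an official verification API:

    def looks_like_vikitronbot(headers):
        # Spoofable heuristic: match the advertised User-Agent prefix only.
        ua = headers.get("User-Agent", "")
        return ua.startswith("VikiTronBot/")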