Problem with data pulled from website

I have successfully created a random perk generator for the game Dead by Daylight which pulls its data from the official wiki, but the one problem is that it isn't picking up all of the perks from the website, and I'm not sure why. I've tried pointing it to the correct part of the page several times (losing tokens each time), but it still comes back missing perks that are plainly on the web page. It seems to be the newer perks that are missing, so is it getting the info not from what's rendered on the page but from the site's underlying code or something? Anyone got any ideas?

I don’t really want to use my last token without understanding why it can’t see what I can see clearly on the website.

Any help would be greatly appreciated.


Try following up with this prompt:

Diagnose and fix why my Dead by Daylight perk scraper misses NEW perks from the official wiki. Assume perks may be injected by JavaScript, hidden behind tabs/“load more,” or spread across subpages. Provide: (1) a brief root-cause analysis; (2) a complete, runnable script (choose Python or JavaScript) that renders the page with JavaScript enabled, waits for network/DOM to settle, then extracts perks from the master list and “Category:Perks,” plus any DLC/chapter pages; it must follow pagination, click/reveal lazy-loaded sections, and traverse tabs. Implement resilient selectors, a fallback that detects and fetches any JSON/data endpoints the site exposes, and polite delays. Normalize/clean (trim, casefold, de-HTML, unify special characters), dedupe, and output perks.json with objects {name, url, icon_url, source_page, last_seen_iso}. Add integrity checks (e.g., compare counts to category totals), a diff of previous vs current, and clear logs of skipped sections. Return 100% coverage including newest perks or fail with a precise reason and suggested selector updates. Include comments explaining how to update selectors if the wiki structure changes.
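One likely root cause worth checking before spending another token: if the wiki runs on MediaWiki (as most game wikis do), newly added perks can appear in the rendered page via JavaScript while being absent from the static HTML a simple fetch sees. The wiki's own API sidesteps that entirely by listing every page in the perks category as plain JSON. Below is a minimal Python sketch of that approach; the API base URL and category name are assumptions you would need to verify against the actual wiki, and the demo parses a sample response offline rather than hitting the network.

```python
import json
from urllib.parse import urlencode

# Assumption: the wiki exposes a standard MediaWiki API at /api.php and
# groups perks under "Category:Perks" -- verify both against the real site.
API_BASE = "https://deadbydaylight.fandom.com/api.php"  # assumed URL

def build_category_url(category="Category:Perks", cmcontinue=None):
    """Build a MediaWiki API URL listing members of a category (paginated)."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmlimit": "500",       # API maximum per request
        "format": "json",
    }
    if cmcontinue:
        params["cmcontinue"] = cmcontinue  # resume token for the next page
    return f"{API_BASE}?{urlencode(params)}"

def parse_members(raw_json):
    """Extract page titles and the continuation token from an API response."""
    data = json.loads(raw_json)
    titles = [m["title"] for m in data["query"]["categorymembers"]]
    cont = data.get("continue", {}).get("cmcontinue")  # None on the last page
    return titles, cont

# Offline demo on a response shaped like the real API's output:
sample = json.dumps({
    "continue": {"cmcontinue": "page|ABC|123"},
    "query": {"categorymembers": [
        {"pageid": 1, "title": "Adrenaline"},
        {"pageid": 2, "title": "Sprint Burst"},
    ]},
})
titles, cont = parse_members(sample)
print(titles)  # ['Adrenaline', 'Sprint Burst']
print(cont)    # 'page|ABC|123' -> pass back via cmcontinue to fetch the rest
```

Because the API returns the category's full membership (following `cmcontinue` until it is absent), this also gives you the exact count to compare against what the generator produced, which answers "is it seeing everything?" without rendering any JavaScript.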

I did try this, but it's been stuck for two days and I can't stop it; when I go back to the website it still says the task is in progress. I've tried pressing the stop button, but it carries on.