>>21981
Thanks Anon, you're an invaluable encouragement to us all!
I like seeing these GUI images used for scoring/ranking. My brain actually connects the dots behind the scenes better if I have something pertinent to look at (even if in hindsight it's apparently obvious). Moar! :^)
Noidodev said we need to scrape more ( >>21968 ). Using cURL to perform automated, parallel downloads (text, images, etc.) is a task we've already solved here (albeit with many refinements still possible). I have little else to offer the AI side of the house, but perhaps together we can all brainstorm our own DIY data-harvesting operations? We should be able to spread that work out easily among any willing anons here, if we can figure out a reasonable way to agglomerate the data back into some central repository (probably one of our own devising)?
Also, this data could be refreshed regularly at a lower bandwidth cost (once the initial downloads are done), since the data itself is updated incrementally. We built in the ability to optimize bandwidth by first downloading just the HTTP header data (alone) and comparing it against past saves by both date & size. By avoiding repetitious downloads within their volunteer data-harvesting 'sectors', anons should be able to multiply the usefulness of their available bandwidth, both to and fro.
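Roughly, the header check looks like this (same sketch-Python as above). The saved-record shape here, a dict of url -> (Last-Modified, Content-Length), is hypothetical; the real thing would persist it to disk however suits the repo.

```python
# Minimal sketch of the header-first bandwidth check: issue a HEAD request
# and only re-download when Last-Modified or Content-Length differ from a
# previously saved record.
import subprocess

def remote_stamp(url: str) -> tuple[str, str]:
    # -s: silent; -I: HEAD request (headers only); -L: follow redirects.
    out = subprocess.run(
        ["curl", "-sIL", url], capture_output=True, text=True
    ).stdout
    modified = size = ""
    for line in out.splitlines():
        key, _, value = line.partition(":")
        if key.lower() == "last-modified":
            modified = value.strip()
        elif key.lower() == "content-length":
            size = value.strip()
    return modified, size

def needs_refresh(url: str, saved: dict[str, tuple[str, str]]) -> bool:
    # Re-download when either the date or the size changed (or when we
    # have no prior record of the URL at all).
    return saved.get(url) != remote_stamp(url)
```

A few header lines per URL instead of the whole file is where the bandwidth multiplication comes from.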
Just spitballing some ideas here, Anon. Thanks again, cheers! :^)
>===
-
prose edit
Edited last time by Chobitsu on 04/15/2023 (Sat) 08:32:26.