Pikmin Archives is a group dedicated to collecting developer notes and promotional material on the Pikmin series of games.
The Pikmin series is a set of RTS games where players must guide a swarm of aliens to thrive in the wild! The Pikmin games are very memorable for their unique art style, contrasting everyday objects with sci-fi technology and fantastical nature. Celebrating that art style, Pikmin Archives is focused on documenting the creative process behind the Pikmin games.
For example, Pikmin Archives member Flamsey restored the old Pikmin 2 USA website, which had ceased to function due to the discontinuation of Flash Player.
Pikmin Archives is most active on their Discord server, which is a hub for exchanging files and fostering discussion. There, a dedicated #archive-submissions channel is used to submit media, and submissions are then curated by the mod team.
Occasionally, members might post their findings to Twitter, but there is no dedicated Pikmin Archives social media account or website at this time.
Just hop on board their Discord Server!
So what are you waiting for? Become a Pikmin Archivist, today!
Looking to discover other archiving communities? Just follow Data Horde’s Twitter List and check out our other Community Spotlights.
glmdgrielson: So, first question, what is Gaming Alexandria?
Hubz: At its core it's both a Discord community and a separate website dedicated to preserving various aspects of video games, such as scans, interviews, unreleased games, YouTube videos, etc. It mainly started as a site where I could share high quality scans but has grown thanks to many people joining up with various skills to help expand the website. The Discord community is really an entity unto itself at this point, where lots of gaming historians/preservationists have come together to share their works and also help each other out when needed with various projects. I love getting to see all the passion in everybody's projects that they put forth and the willingness of the community to offer help when asked.
g: Tell me more about this community. I’m active in the server, but what does it look like from your end?
H: From an admin standpoint I have access to all the channels, which include the private #staff and #mods channels where we discuss upcoming articles or projects for the site, as well as handling the occasional argument or bad apple in the chat. Dylan Mansfeld (DillyDylan) handles a lot of great articles on undumped/prototype games that were previously unreleased. Ethan Johnson writes for his own blog (https://thehistoryofhowweplay.wordpress.com/) and Gaming Alexandria at times, and is our editor, so he glances through and cleans up all the articles that get posted. Jonas Rosland, the Executive Director of the NPO I'm a board member of, called Hit Save (https://hitsave.org/), does a lot of thankless technical work behind the scenes, which includes a NAS he has set up not only for the staff of the website to store project files but for the community at large, which is a huge help. Wietse van Bruggen (Densy) handles a lot of the moderation of the chat and has been a huge help keeping the Discord community friendly and clean with his balanced moderation style. Last but not least there is Stefan Gancer (Gazimaluke), who did the original site redesign and has been a great idea man for ways to improve the site and community as time has gone on. For me personally, I try to keep up with all the chat in the channels (though it can be tough at times!) just to have an idea of what's going on, to see what I can help with or connect people to further projects, and to post my scans and projects as they're completed. Thanks to the rest of the staff I rarely have to step in and moderate, which is very nice!
g: I’m going to skip over the omission of Norm and ask about the history of how the site has evolved.
H: LOL yes Norm is a menace to society and must be stopped.
Editor's note: Hubz has a mock rivalry with Norm, a.k.a. the Gaming Historian, which is a frequent running gag on the server. I do not believe there is any actual malice.
The website itself started officially on October 23rd, 2015 and was just a basic text website that I could easily upload to in order to share my scans; it was very barebones. The reason I wanted to get high quality scans out was due to using an emulator frontend called Hyperspin. For popular systems it had a lot of decent quality artwork for boxes, but for lesser known systems it was sorely lacking, and that triggered my OCD and made me realize that scanning stuff in high resolution was something that needed to be done. Slowly but surely, I met others who wanted to scan in high quality and have their stuff hosted, such as Densy, and they would submit stuff. At some point I got involved with the VGPC Discord and met Kirkland, who had been quietly doing something similar with his collection, and collaborated with him and others on establishing scanning standards to use going forward, to have some level of consistent quality among those who were willing to follow them, which eventually led to what is now the https://scanning.guide/. In late 2018 the site was graciously redone by Gazimaluke and relaunched in the design you see now. We started branching out into actual articles written by our staff and releasing prototypes and unreleased games that we came across. The site continues doing this to this day, though we are branching out into more guest authors from the community posting interviews and articles as well in the near future.
g: As well as hosting my site, for which I am grateful. So, what is the day to day like for you?
H: Day to day on the scanning I try to get at least one magazine done daily. Doesn’t always happen but, in general, I debind a magazine the night before, then in the morning scan it in before leaving for work. If work gets slow I work on processing the scans, or else I’ll do it later that night and get them uploaded to the site and the Internet Archive.
g: Interesting. So how big do you think your archive is by this point?
H: Archive upload-wise I’m probably right around 2900 items if you count stuff that was removed lol. Then there’s a bunch on the site that wasn’t done to the higher scanning standards I go by now that’s not on the archive. So I’d guess in the 3000-4000 item range currently.
g: Do you know how big it is in terms of filesize?
H: Let me see real quick…
Looks like 2.5TB, which is another reason I'm so thankful to have the Internet Archive to host my scans, due to the space and bandwidth that would be required otherwise.
The site alone usually has about half a TB of traffic per month so I can only imagine what it would be like if the magazine scans were also hosted directly on it.
g: Neat. Is there anything interesting that you got to be a part of due to GA that you would like to share?
H: Biggest thing is probably working with The Video Game History Foundation on scanning their extensive magazine collection so digital copies can be provided along with physical copies at their library. Being able to leverage the Internet Archive so people all over the world can easily access the magazines I’ve scanned that they might not have been able to easily otherwise is a great feeling personally for me. So many of these things are quite difficult to acquire and expensive as time goes on so having them as an ally in the preservation world is a godsend. There’s been lots of other connections and other projects I’ve worked on as well but I won’t ramble forever on that. Not only is Gaming Alexandria a tight community that likes to help each other out but there’s plenty of other preservation groups like VGHF, TCRF, and Hidden Palace just to name a few and we all get along great and try to push preservation forward together.
There’s so much work that needs to be done that we need all the help we can get and we need to support each other any way we can I think.
g: True that. Last question for now: anything that you would recommend to a would-be archivist?
H: I think it’s a good idea to preserve what interests you, which seems to go without saying, but I mean it more from a sense of not only going after what is popular. While you might not get much fanfare initially for the more obscure stuff it’s likely you’ll be the only one doing it and it’s important it’s being done. If you do good work for long enough it will get noticed, and to make good work easier it’s best to go with what you’re passionate about. The other thing I would suggest is not beating yourself up or comparing your output to others. Do what you can when you want to, this is a hobby after all. If you make yourself miserable trying to do something your output will naturally suffer or you might even burn out and stop altogether. Like I said before, we need all the help we can get, so try to avoid that if at all possible.
g: Thank you for being here, overlord Hubz. It’s been good talking to you.
H: No problem! Thanks for the interview.
– glmdgrielson, being a very good minion interviewer
Desert Bus For Hope is a yearly charity stream, running under the premise that the more money is received, the longer the stream goes on, and the more the organizers have to play the dullest video game imaginable. So dull, in fact, that Desert Bus has never been officially released. This year's fundraiser gave us a stream that ran exactly an hour under one week: 6 days and 23 hours! That is a very long stream with a lot of data to preserve. What follows is the story of how that happens.
Note: DBx refers to the iteration of Desert Bus for Hope. For example, this year, 2021, was DB15. Also, I have only minimally modified our interview, by adding in links where applicable and making minor spelling corrections.
glmdgrielson: So first off, outside of the VST, what are you up to?
Sokar: I do video editing and Linux server security / software support, and various other (computer related) consulting things for “real work”.
g: So you started off with just the poster for DB6, according to the site, correct? How did that work?
S: We didn’t actually start doing the interactive postermaps till DB8, then I worked backwards to do all the previous ones (still not done).
The VST itself started formally during DB6.
g: That’s when Graham contacted MasterGunner, who presumably contacted you, correct?
S: Tracking the run live in some way was a confluence of ideas between me, Lady, and other members of the chat at the time. Graham knew how to get ahold of Gunner about making live edits because he was one of the people who helped with the DB5 torrent.
I honestly don’t remember how most of the DB6 VST crew was put together, it was very last minute.
g: Do you know anything about how that torrent was made?
S: The first DB5 torrent?
g: Yes.
S: Kroze (one of the chat mods) was physically at DB5 and brought a blank external HDD with him specifically for recording the entire stream. Then, after the run, Fugi and dave_random worked together to create the torrent (with all the files split into 15min chunks). I wanna say the torrent file was initially distributed via Fugi's server.
DB5 was the first time the entire run was successfully recorded.
LRR had previously toyed with the idea (DB3, but ended up doing clips instead) and steamcastle attempted to record all of DB4 but was unsuccessful.
g: And DB6 was the first year the VST existed. What was that first year like?
S: The first year was VERY short handed; we only had 14 people, so a LOT of the "night" shifts were either just me by myself or me and BillTheCat
We really didn’t know what we were doing, the first rendition of the DB6 sheet didn’t even have end times for events.
There was just “Start Time” “Event Type” “Description” and “Video Link”.
At some point we (the VST) will just re-spreadsheet the entire run, because we were so short handed we missed a lot of things. When I went back to make the DB6 postermap I think I ended up uploading ~17(ish) new videos, because that was how many posterevents weren't even on the sheet.
g: What sort of equipment or software did you use back then?
S: We used google sheets (and still do, but not in the same way anymore), and then all the “editing” was done via Twitch’s Highlight system at the time, which then had a checkbox to auto upload the video to youtube.
Then there were a few people with youtube access that could enable monetization and other things like that.
Twitch's Highlight editor (especially at the time we used it, DB6/DB7) was extremely painful to use on very long VODs; there was no "seek by time". You had to use the slider, kinda position it where you wanted, and then just wait and be quick on the cut button.
We didn’t actually start capturing the run ourselves until Twitch’s overzealous VOD muting happened ( 2014-08-06 ) and we had to figure out a new way of doing things.
g: And just two years down the line, you had to start making your own tools. What was that like?
S: When that happened we had roughly 3 months to figure out what to do. dave_random put in a ton of time figuring out how to capture the run (using livestreamer, which has since been forked into streamlink). The way it worked during DB8 was that the video would get uploaded to youtube with a couple of minutes of padding on either side, then the video editors would go in and edit the video using youtube's editor.
Then we found out that there is a limit tied to youtube's editor: you can only have a set number of videos "editing" at once, then you get locked out of the editor for a while. We (the VST and DesertBus in general) always end up being an edge case.
MasterGunner wrote the first version of our own editor so we could edit the video before it got sent to youtube.
The VST website itself also didn't exist till DB9; a lot of the poster revisions archive only exists because J and myself kept copies of all the revisions.
g: After DB9 is when you started trying to backup the previous years, right?
S: Yea, so (internally) the VST had talked about archival problems over the years, and when Anubis169 went to DB9 (in person) to volunteer, he also went with the express purpose of grabbing as many of the Desert Bus files as he could find at the time.
When he got back home, he and I went over the files he managed to get and he sent me a copy of everything he grabbed. I also spent the time trying to figure out how uStream had stored all the DB1 and DB2 clips, then downloaded a copy of all of them.
It turned out to be a very good time to do that, since a few years later IBM bought uStream and deleted all the archives.
g: So that looks to be all of the history questions I have. Now for the fun part: describe the process of archiving a Bus.
S: As in as it currently stands?
As in “how did this year work”?
g: Yes. How would the process of archival go as it currently stands?
S: well, that’s a hard one, haha
g: Not surprised, given the scope of the event we’re dealing with.
S: For old stuff: I already (also) flew to Victoria to get the missing DB3 and DB4 files, which was successful; the next time I go it will be to recover old prize data (I'm in the process of making a full prize archive).
For what we "regularly" capture, setting up for a new run goes pretty much like this:
The current version of the wubloader (our capture architecture), re-written by ekimekim and chrusher after DB12, is used by ekim all year, so he regularly works on it and fixes it to work around anything twitch changes.
~3 months before the run we will put out the signup form to the internal VST place; a week or so after that it goes out to the IRC channel and the LRR discord (in the desertbus channel).
During about 2 of those 3 months I’ll finish up any new stuff for the VST website I’m working on, so they are ready for the run.
The VST Org. Committee has meetings during the year to talk about any changes we want to make to any of the internal tools or our external-facing stuff; the first of these usually happens in June for a new run.
Sorry, some of this is out of order.
g: You’re fine.
S: If we need to inform regular VST members of some major changes we've made, we schedule meetings over some form of video chat for them to sign up for, and then do a quick check over everything new so we can get any questions answered and have everyone on the same page (usually about 30 min per session).
New people get a separate training session that's usually about 90-120 min in length. New people always start off as "spreadsheeters"; we don't rotate in new editors until they've been around for a couple years and kind of have a feel for what we do.
For setting up the VST website for the run, there's a separate "front page" for when the run is live, and also the head node is dropped back to being non-public and we stand up an 8-node, globally located DNS cluster to handle the load. It runs on a 5 minute update cycle because late-run, when there is a new poster revision, a full update and sync takes about 3 & 1/2 minutes.
For setting up a "new year" on the VST site, there's an amount of manual work, but it's only about 3 hours or so; it really depends on how many of the other things we track are set up at that point.
g: Other stuff being things like the charts, the clock, chat stats?
S: The clock is pretty easy. The chat stats require that the chat capture be enabled and going. The graphs require that the donation capture is already going, so that can't be set up till donations reset. The gamejam page can't be set up till Famout gets the gamejam on itch.io set up. The gameshows page can't be set up till Noy2222 actually knows what gameshows he's doing this year. The spreadsheet page can't be set up until all the google docs spreadsheets are set up. The posters page requires that Lunsford has the poster that they're drawing set up somewhere for us to query. And the animated poster evolution page requires 3 poster revisions before it works at all. The postermap page is updated manually when I have time to draw/trace and then import the new postermap (ImageMap) of the poster Lunsford has drawn (still not done with this year's yet).
For standing up our capture infrastructure: there are at minimum 2 nodes on "hardware" (as in non-virtualized) that are "editing" nodes, only one of which actually uploads to the youtube channel. After that, (usually) all the other nodes are virtualized and (this year) were provided by 6 different people; these are completely separate from the VST website nodes.
We also always try to make sure all the capture nodes are geographically distributed so a random network outage can’t hurt us, and so if one node misses a segment the other 7 can fill in the blank.
Once all of those are stood up and working, they’re all imported into the monitoring dashboard so we know if one of them has a problem. Usually we have all the capture (and website) hardware stood up about 1 week before the run starts. Then we have time to test it and ekimekim and chrusher (Wubloader), ElementalAlchemist (who coded the new version of thrimbletrimmer, our editor), and myself (website) have time to fix any bugs / finish any new features. At that point all the approved (new and old) VST members will also get an invite to the private sheet. Also, we invite any new VST members to the private chat space we use during the run (self-hosted Zulip).
We also spend a lot of time working on the schedule (as part of the signup form people tell us their available hours). People are limited to a max of 6 hour shifts, so scheduling ~60 people over a week while trying to maintain ~8 active people on the private spreadsheet is actually quite complex. ekimekim created a python script to create an initial rough guess, and we then have a VST Org meeting to smooth things out. The resulting (schedule) spreadsheet is then given to everyone on the VST so they can check for errors in their personal schedule, and then (for during the run) the schedule's csv is fed into a zulip bot that announces who's going on/off shift. Also, once I have the VST website nodes set up I give J access to one (geographically) near him, which he also uses for his own capture of the chat, twitch, and poster revisions; that way, if the VST website head-node misses something we have a backup copy with the stuff J sets up as well.
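Editor's note: to make those scheduling constraints a little more concrete, here is a minimal, made-up sketch of what a "rough first guess" scheduler could look like. It is not ekimekim's actual script; the volunteer names, availability data, and greedy strategy are all assumptions for illustration.

```python
# Editor's note: NOT ekimekim's actual script. A toy "rough first guess" scheduler under
# the constraints described above: volunteers list their available hours, shifts are
# capped at 6 consecutive hours, and the aim is ~8 people on the sheet at any time.
# The volunteer names and availability below are made up.
from collections import defaultdict

HOURS = range(24 * 7)    # one slot per hour of a week-long run
TARGET_PER_HOUR = 8      # aim for roughly 8 active spreadsheeters
MAX_SHIFT = 6            # nobody is scheduled for more than 6 hours in a row

# volunteer -> set of hours (0..167) they said they were available
AVAILABILITY = {
    "alice": set(range(0, 40)),
    "bob": set(range(30, 90)),
    "carol": set(range(80, 168)),
}

def rough_schedule(availability):
    schedule = {}
    shift_run = defaultdict(int)  # consecutive hours each person has currently been on
    for hour in HOURS:
        on_now = []
        for person, free_hours in availability.items():
            if len(on_now) >= TARGET_PER_HOUR:
                break
            if hour not in free_hours or shift_run[person] >= MAX_SHIFT:
                continue
            on_now.append(person)
        for person in availability:  # update the consecutive-shift counters
            shift_run[person] = shift_run[person] + 1 if person in on_now else 0
        schedule[hour] = on_now
    return schedule

if __name__ == "__main__":
    for hour, people in list(rough_schedule(AVAILABILITY).items())[:6]:
        print(hour, people)
```

A greedy pass like this only produces the starting point; as Sokar says, the VST Org meeting is where the rough guess gets smoothed into the real schedule.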
I think that’s it, everything I’m thinking of now is post-run stuff. Oh, J also runs a capture of all of the Prize data that we preserve for the (upcoming) prize archive.
g: Well, that’s one heck of a process. Mind going into the tech used, like Wubloader and thrimbletrimmer?
S: Sure, wubloader is an ekimekim/chrusher-coded Python3 project that is a custom HLS capture (as in, we capture every 2-second-long .ts segment twitch sends out while the stream is going). It uses PostgreSQL for the backend databases, nginx for web, FFMPEG for doing the actual video editing, and docker for easier node deployment. It uses the Google Docs API for interaction with the private sheet and the YouTube API for uploading to youtube / managing the playlists.
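Editor's note: for readers unfamiliar with HLS, here is a deliberately tiny sketch (not wubloader's code) of the underlying idea: repeatedly fetch the stream's media playlist and save every .ts segment it lists. The playlist URL is a placeholder, and real capture software also handles retries, playlist expiry, and coordination between nodes.

```python
# Editor's note: NOT wubloader itself, just the core idea in miniature: poll an HLS media
# playlist and save every .ts segment it lists. The playlist URL is a placeholder.
import os
import time
from urllib.parse import urljoin

import requests

PLAYLIST_URL = "https://example.com/stream/index.m3u8"  # placeholder, not a real stream
OUT_DIR = "segments"

def capture(playlist_url, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    seen = set()
    while True:
        playlist = requests.get(playlist_url, timeout=10).text
        for line in playlist.splitlines():
            line = line.strip()
            # media playlists list one segment URI per non-comment line
            if not line or line.startswith("#"):
                continue
            segment_url = urljoin(playlist_url, line)
            name = segment_url.rsplit("/", 1)[-1]
            if name in seen:
                continue
            seen.add(name)
            with open(os.path.join(out_dir, name), "wb") as f:
                f.write(requests.get(segment_url, timeout=10).content)
        time.sleep(2)  # segments are about 2 seconds long, so poll on a similar cadence

if __name__ == "__main__":
    capture(PLAYLIST_URL, OUT_DIR)
```

Wubloader's real architecture layers the database, backfilling between nodes, and the cutting/upload pipeline on top of this basic loop.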
Thrimbletrimmer (now coded by ElementalAlchemist) uses HLS.js and a bunch of custom javascript and html for the editing interface. It can make multiple cuts (so we can cut the middle out of a video) and has the ability to add chapter markers to the description if we want to do that on a longer video.
g: So the upload process is done by Thrimbletrimmer?
S: When someone makes an edit in Thrimbletrimmer, it talks to thrimshim, which then passes the actual edits on to the wubloader, which does the edit and uploads the video to youtube.
thrimshim is a piece of the wubloader that is kind of like an API to all the data in wubloader
so when a video is marked in the private sheet for upload, there is a link to thrimbletrimmer that has a UUID on it, which thrimbletrimmer passes to thrimshim so it knows which video segments correspond to the requested video. On the way back it's like "edit this uuid with the following edits, here's the video title and description".
g: So what about the Twitch chat? How do you grab that?
S: Twitch chat is captured in 2 ways: via irssi (a unix command line IRC client), which both J and myself run captures with, and (this year) via a capture ekimekim coded up that also grabs all the meta-data for each chat message.
So before the run starts, J and I just set up our irssi sessions on 2 respective servers and leave them running in screen. ekimekim runs his custom capture off 2 of the wubloader nodes.
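Editor's note: the VST does this with irssi inside screen, but for readers who want to see the moving parts, here is a small Python sketch of the same idea: joining Twitch chat over IRC with an anonymous login and appending every raw line to a timestamped log. The channel name and log path are placeholders, and the anonymous-nick convention is my understanding of Twitch's IRC interface rather than anything from this interview.

```python
# Editor's note: not the VST's actual setup, just an illustrative Python equivalent of
# logging a Twitch channel's chat over IRC. Channel and log path are placeholders.
import socket
import time

HOST, PORT = "irc.chat.twitch.tv", 6667
CHANNEL = "#desertbus"            # placeholder channel name
LOGFILE = "desertbus_chat.log"

def capture_chat():
    sock = socket.create_connection((HOST, PORT))
    sock.sendall(b"NICK justinfan13245\r\n")        # "justinfan" nicks are read-only logins
    sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
    with open(LOGFILE, "a", encoding="utf-8") as log:
        buffer = b""
        while True:
            data = sock.recv(4096)
            if not data:
                break                               # connection closed
            buffer += data
            while b"\r\n" in buffer:
                line, buffer = buffer.split(b"\r\n", 1)
                text = line.decode("utf-8", errors="replace")
                if text.startswith("PING"):
                    # answer keepalives so the server doesn't drop us
                    sock.sendall(text.replace("PING", "PONG", 1).encode() + b"\r\n")
                    continue
                log.write(time.strftime("%Y-%m-%d %H:%M:%S ") + text + "\n")
                log.flush()

if __name__ == "__main__":
    capture_chat()
```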
g: So how has this setup evolved over time?
S: For chat capture or video capture?
g: Both.
S: Chat capture has largely been the same; old (pre-DB6) chat capture was just done with whatever IRC program the person making the capture happened to use (mIRC or IceChat).
Video capture has changed quite a bit. The first version of the wubloader (DB8) [coded by dave_random] was done with livestreamer (saved to mp4 files) and only did rough cuts. The 2nd version (DB9-12) came with Thrimbletrimmer (coded by MasterGunner), which did specific cuts but still used livestreamer as the capture source. During DB12 we discovered Twitch had implemented a "24-hour watch limit", which caused both capture nodes to miss part of Ash & Alex's driver intro. Starting with DB13, ekimekim and Chrusher implemented a custom home-grown capture method that attaches directly to the HLS stream and resets itself every so often to avoid the 24 hour watch limit.
The new capture method saves all the 2-second long .ts files as they come out, and each node fills in for any other node that got a partial or missed segment; the capture nodes are now a cluster instead of being independent.
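Editor's note: to illustrate that "fill in the blank" behaviour, below is a toy Python sketch of one way such a backfill could work. It is not wubloader's actual backfill code; the peer URLs and the plain-text segment listing are hypothetical stand-ins for however the real cluster shares its segment lists.

```python
# Editor's note: a toy illustration of cluster backfill, not wubloader's real code.
# Assumes each node exposes a plain-text listing of the segment filenames it holds;
# the peer URLs and listing path are hypothetical.
import os

import requests

PEERS = ["https://node2.example.com", "https://node3.example.com"]  # hypothetical peers
SEGMENT_DIR = "segments"

def backfill(segment_dir, peers):
    local = set(os.listdir(segment_dir))
    for peer in peers:
        listing = requests.get(f"{peer}/segments/index.txt", timeout=10).text.split()
        for name in listing:
            if name in local:
                continue                       # we already hold this 2-second segment
            data = requests.get(f"{peer}/segments/{name}", timeout=10).content
            with open(os.path.join(segment_dir, name), "wb") as f:
                f.write(data)
            local.add(name)

if __name__ == "__main__":
    backfill(SEGMENT_DIR, PEERS)
```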
The editing process has gone from using twitch highlights -> using youtube’s editor -> using a custom editor coded by MasterGunner -> using a further improved editor coded by ElementalAlchemist.
Compared to using twitch or youtube’s editor the ones coded by MasterGunner and ElementalAlchemist are an amazing improvement, and much less buggy.
g: Anything else you want to add? Advice for somebody considering a similar archival project? Other than “don’t”?
S: Honestly: “Start on the first year of the event”, “Ask us (the VST) for advice”, “Preserve everything, backtracking to get something you missed is always more painful”
“Don’t try to do it by yourself”
The VST only works because of all the people involved and learning from the mistakes we’ve made over the years.
g: Any closing thoughts before I wrap up this interview?
S: All of this would never have happened if LoadingReadyRun hadn't put "First Annual" on the website banner back in 2007 as a joke.
g: Thank you for your time!
– glmdgrielson, along for the eight hour, mind-numbingly dull drive
URLTeam is an arm of Archive Team, solely dedicated to collecting shortened URLs.
It is unusual to see a long-term archiving or preservation project; usually, once a collection or a grab is completed, that's that. Yet URLTeam, who have taken on a task with no apparent end date, have endured for over 10 years, growing into a community in their own right.
Circa 2009, Scumola of Archive Team noted how shortened URLs had proliferated on a little website called Twitter. Twitter was then, and still is now, infamous for its character limitations. To free up space, users began sharing shortened links, and as other users discovered this trick, it only spread. This led to a paradigm shift in the web ecosystem: links became a lot more unrecognizable, both to referrers and referees.
Archivists, too, were vexed. The traditional approach to web archiving had been to target a particular domain or subdomain: URLs which followed a pattern. Now how could they expect to save posts from blogs, forum threads or stories from news sites, if URLs were coming to them from TinyURL or bit.ly? Thus, URLTeam was born out of an effort to catalogue said short URLs.
Short URLs are loose connections. You cannot actually shorten a domain that someone else has registered; that is to say, one cannot rename google to gugle.
The secret that URL shorteners employ is to generate short URLs on YOUR OWN (or a 3rd party's) servers, which can be made to redirect to longer URLs. When someone connects to your server with the shortened address, you just redirect them to the associated full address in your database. So any time you visit a short link, you are going to be visiting (at least) two websites.
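A minimal sketch of that trick, assuming a tiny in-memory table of short codes (a real shortener generates codes on demand and keeps them in a persistent database):

```python
# A minimal sketch of the URL-shortener redirect trick; the short codes and
# destinations below are made-up examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

DATABASE = {  # short code -> full destination URL
    "/abc123": "https://datahorde.org/",
    "/xyz789": "https://archive.org/details/301works",
}

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = DATABASE.get(self.path)
        if target is None:
            self.send_error(404, "Unknown short code")
            return
        self.send_response(301)                 # a 301/302 plus a Location header is all
        self.send_header("Location", target)    # a URL shortener really does
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), Redirector).serve_forever()
```

Run this and visiting http://localhost:8080/abc123 bounces the browser straight to the stored long URL, which is really all a shortener does.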
Old links dying or websites shutting down is a given. Yet adding more redirects is going to lengthen the chain to get to your final destination. As the saying goes, the chain is only as strong as the weakest link. Should the URL shortening service shut down, all of its short URLs will break, even if the actual sites being redirected to are still online. So that is reason enough to hoard short URLs. That is what URLTeam does.
URLTeam's approach is to decompose the problem. Even if we cannot possibly crawl every link shared on the internet, nor every final redirect, at any time there are going to be a relatively small number of URL shortening services. So URLTeam begins by hunting for said URL shorteners. If we can recognize ow.ly or goo.gl links, that's a start.
Once a new shortening service is added, the next step is crawling the web for short URLs. Since URL shorteners are almost universally linkable on any site, this is going to have to be a very broad crawl, akin to a crawl for building a search engine. This brings us to the URLTeam Tracker, which oversees the distribution of the crawling jobs to hundreds of volunteering archivists. You can even see live stats on their page!
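To get a feel for what each job boils down to, here is a rough sketch of the two basic steps: spotting links that belong to a known shortener, and recording where each one redirects without following the chain any further. The shortener list and the example link are illustrative, not URLTeam's actual configuration.

```python
# A rough sketch of detecting known short URLs and recording their first redirect.
# The shortener list and example link are illustrative only.
from urllib.parse import urlparse

import requests

KNOWN_SHORTENERS = {"ow.ly", "goo.gl", "bit.ly", "tinyurl.com"}  # tiny sample list

def is_short_url(url):
    return urlparse(url).netloc.lower() in KNOWN_SHORTENERS

def resolve_once(url):
    """Return the Location header of the first redirect, i.e. the stored long URL."""
    resp = requests.head(url, allow_redirects=False, timeout=10)
    if 300 <= resp.status_code < 400:
        return resp.headers.get("Location")
    return None

if __name__ == "__main__":
    link = "https://bit.ly/example"  # placeholder, not an actual archived item
    if is_short_url(link):
        print(link, "->", resolve_once(link))
```

Pairs of short URL and destination collected this way are the kind of data that ends up in the 301works collection described below.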
Collected links are finally shipped to the 301works collection on the Internet Archive. The 301works collection also houses link databases donated by URL shortening services, so if you happen to own a URL shortening service at risk of shutting down you might want to get in contact with them.
Communication happens on #[email protected]. General info can be found on their Wiki page.
If you want to hop right in, you can simply download and run an Archive Team Warrior and then select the URLTeam project. You can also run the project using a Docker container by following the instructions here.
Now if you excuse me, I have to cut this short so I can cut some shortcuts.
Looking to discover other archiving communities? Just follow Data Horde’s Twitter List and check out our past Community Spotlights.
“Every artist has thousands of bad drawings in them and the only way to get rid of them is to draw them out.”
Chuck Jones
Any artistic product will have early drafts or scrapped ideas missing from the final version. In the film industry, The Cutting Room Floor refers to a hypothetical space where all the unused footage for a movie is dumped.
True to the namesake, The Cutting Room Floor is an online community who collect cut or unused content in video games, instead of movies. Assets hidden in inaccessible locations or the code itself, shortcuts used by the developers, unusual easter eggs, incomplete levels — you name it!
The Cutting Room Floor primarily use their MediaWiki as a database to publish and/or document unused, unreleased or incomplete video game content. For a sample of TCRF's discoveries, just take a look at their “Did You Know…” section. Did you know that Dragon's Lair for the Amiga actually has a hidden message to discourage people from cracking the game?
For one thing, data mining hidden content is a skill in its own right. While this is often an individual task, it's not an isolated one: TCRF provide thorough guides on how to get started and best practices for reporting new discoveries.
In cases where content is removed from the final release, TCRF work together with their sibling community The Hidden Palace who collect video game prototypes.
Consider registering on the Wiki, there’s plenty of to-dos to fill out! TCRF is spread across a number of platforms, but are currently most active on Discord.
Give the Cutting Room Floor a visit, today! And CUT!
Looking to discover other archiving communities? Just follow Data Horde’s Twitter List and check out our past Community Spotlights.
The Hidden Palace is a group of video game preservationists who hunt down video game prototypes, cut features and other game development media. You could say they are out to find out how games change throughout their development cycle, what elements and mechanics actually make it to the final product.
The name comes from the Hidden Palace Zone from Sonic 2, an unused area in the original release.
For about 15 years, the Hidden Palace has amassed a collection of over 1000 development builds for various games on a multitude of systems. A good portion of these have been mirrored on the Internet Archive, where you can try them out for yourself via MAME.
More than that, the Hidden Palace is about analyzing differences between different game builds, that is to say different stages of development. The Hidden Palace also frequently cooperates with The Cutting Room Floor to document features which have been cut from the final release. Take a look at their recent joint-update on the elusive Sonic 1 Mega Drive Prototype!
It’s one thing to hunt down an obscure product, but where do you find a game that was never really released? Good candidates are developers or testers who may have had access to earlier versions of the game. Next come hobbyists or other preservationists who may have acquired a prototype from the above options. At this stage it’s likely that a prototype will go up for auction.
In any case, contributors to the Hidden Palace ship prototypes they have come into possession of, so that they may be dumped and/or scanned. If you would be interested in contributing yourself, get in touch with [email protected] and also have a look at their contribution page (they really value your confidentiality).
Even if you can’t travel to the Hidden Palace’s preservation studio for yourself, there is a lot you can do to help. Just join the Wiki, there’s plenty of to-dos to fill out!
Or if you would rather talk and meet with people, perhaps the Hidden Palace Discord Server is for you!
Then what are you waiting for? Go forth, and discover the next Hidden Palace Zone, today!
Looking to discover other archiving communities? Just follow Data Horde’s Twitter List and check out our other Community Spotlights.
Dead Game News (DGN) is a group dedicated to reporting games which are no longer available to consumers, or which are at risk of becoming unavailable. Games that are dying or dead, as it were.
DGN is rather unique among preservation communities, being geared towards recent or ongoing events. You can expect them to report offline games being delisted, or servers shutting down for multiplayer games.
Beyond reporting dying games, they also work to spread awareness of issues in the games industry which hurt the lifespan of many games. These include addressing pitfalls in DRM (Digital Rights Management) tools and “games as a service” practices.
They are most active on their Discord server which is a hub for exchanging news and fostering discussion.
Occasionally they might tweet about notable dying games on their Twitter account. Rarely, you might see a DGN video on Accursed Farms, where DGN first originated.
Just hop on board their Discord Server! Or if you would like to just follow the most important headlines give @deadgamenews a follow.
So what are you waiting for? Become a Game Mortician, today!
Looking to discover other archiving communities? Just follow Data Horde’s Twitter List and check out our other Community Spotlights.
Now, they aren't just uploading the games online (like that's a worry). Like with our overlords Gaming Alexandria, they focus on things like promotional material or related memorabilia.
For example, here is a little comic based on the days when it was Madou Monogatari. Wish I understood a lick of what it says.
If this sort of thing sounds interesting to you, there is a page that links to their archive as well as things that they need to get done.
Note: the team over there is having some tough times and could use a little help.
– glmdgrielson
Looking to discover other archiving communities? Just follow Data Horde’s Twitter List and check out our other Community Spotlights.
The International Internet Preservation Consortium, or IIPC for short, is an organization dedicated to the preservation of historically and culturally significant websites. Their members include libraries, archives, museums and educational institutions from all across the world.
The IIPC is a forum, a place where all members can share their knowledge and expertise. In addition, they also develop new ideas, inventing new tools of the trade. For instance, the WARC file format which came out of the IIPC is now widely used for saving downloadable snapshots of websites.
Secondly, the IIPC strives to make archiving a lot more approachable. To that end, they work to lower the high entry barrier through training programs. They hold workshops and seminars to educate archivists on the latest digitization technologies. All the while, they spread awareness of why preserving the history of the net is so important.
The IIPC is split into a number of working groups. Each of these groups manages projects and tasks related to a specific branch of archiving. The current working groups are:
In addition, members might opt to host their own projects outside of the working groups. One such past project was TwitterVane, a project for archiving Twitter activity, headed by the British Library. Furthermore, members can also apply for funding on their projects, under the IIPC’s Funding Program.
Finally, the IIPC holds many events throughout the year: workshops, seminars and the like. Do note, these events are not exclusive to members; you can attend some of them on Zoom.
Well, this is a bit of a loaded question, since IIPC members are not individuals. However, if you are part of an institution that may be eligible for the IIPC and would like to enroll it, you can apply for membership on their website: http://netpreserve.org/join-iipc/.
In case you are not an affiliate of such an institution, but would still like to get involved, you can check out their events. If you are a developer, you might be able to contribute to their GitHub repos.
So what are you waiting for? Consort with the IIPC today!
Looking to discover other archiving communities? Just follow Data Horde’s Twitter List and check out our other Community Spotlights.
It's all in the name: Fanlore is an extensive lore of derivative works made by fans, an encyclopedia of fan works! Fanfiction, fanart, filks, you name it! Fanlore is a wiki which operates under the Organization for Transformative Works.
What do they do?
Art History is considered a core discipline in the Humanities. Fanlore takes its mission very seriously, in that they try to cover a very much neglected portion of the history of modern art.
Fanlore doesn't only document online/offline fan works, but also critically analyses them. They build timelines, codify tropes and research bibliographic information on authors or artists who might have been deemed “lacking in notability” for an actual encyclopedia.
How do they do it?
Most of this activity takes place on a MediaWiki, namely the Fanlore Wiki. They currently sport a whopping 52,017 articles and 940,737 edits.
How do I sign up?
Though Fanlore is technically a project under the umbrella of the OTW, OTW membership or a similar position is not necessary to join; anyone who's willing to edit a wiki is a potential member.
So what are you waiting for? Become a lore keeper today!
Looking to discover other archiving communities? Just follow Data Horde’s Twitter List and check out our other Community Spotlights.