Interview with Hubz of Gaming Alexandria
https://datahorde.org/interview-with-hubz-of-gaming-alexandria/
Mon, 18 Apr 2022 09:09:30 +0000

Hello, here’s another interview, this time with our head overlord Hubz of Gaming Alexandria.

glmdgrielson: So, first question, what is Gaming Alexandria?
Hubz: At its core it’s both a Discord community and a separate website dedicated to preserving various aspects of video games, such as scans, interviews, unreleased games, YouTube videos, etc. It mainly started as a site where I could share high quality scans but has grown thanks to many people joining up with various skills to help expand the website. The Discord community is really an entity unto itself at this point, where lots of gaming historians/preservationists have come together to share their works and also help each other out when needed with various projects. I love getting to see all the passion in everybody’s projects that they put forth and the willingness of the community to offer help when asked.

g: Tell me more about this community. I’m active in the server, but what does it look like from your end?
H: From an admin standpoint I have access to all the channels, which include the private #staff and #mods channels where we discuss upcoming articles or projects for the site, as well as handling the occasional argument or bad apple in the chat. Dylan Mansfeld (DillyDylan) handles a lot of great articles on undumped/prototype games that were previously unreleased. Ethan Johnson writes for his own blog (https://thehistoryofhowweplay.wordpress.com/) and Gaming Alexandria at times, and is our editor, so he glances through and cleans up all the articles that get posted. Jonas Rosland, who is the Executive Director of Hit Save (https://hitsave.org/), the NPO I’m a board member of, does a lot of thankless technical work behind the scenes, including a NAS he has set up not only for the staff of the website to store project files but for the community at large, which is a huge help. Wietse van Bruggen (Densy) handles a lot of the moderation of the chat and has been a huge help keeping the Discord community friendly and clean with his balanced moderation style. Last but not least there is Stefan Gancer (Gazimaluke), who did the original site redesign and has been a great idea man for ways to improve the site and community as time has gone on. For me personally, I try to keep up with all the chat in the channels (though it can be tough at times!) just to have an idea of what’s going on and see what I can help with or connect people to further projects, as well as post my scans and projects as they’re completed. Thanks to the rest of the staff I rarely have to step in and moderate, which is very nice!

g: I’m going to skip over the omission of Norm and ask about the history of how the site has evolved.
H: LOL yes Norm is a menace to society and must be stopped.

Editor’s note: Hubz has a mock rivalry with Norm, a.k.a. the Gaming Historian, which is a frequent running gag on the server. I do not believe there is actual malice.

H: The website itself started officially on October 23rd, 2015 and was just a basic text website that I could easily upload to in order to share my scans; it was very barebones. The reason I wanted to get high quality scans out was due to using an emulator frontend called Hyperspin. For popular systems it had a lot of decent quality artwork for boxes, but for lesser known systems it was sorely lacking, and that triggered my OCD and made me realize that scanning stuff in high resolution was something that needed to be done. Slowly, but surely, I met others that wanted to scan in high quality and have their stuff hosted, such as Densy, and they would submit their scans. At some point I got involved with the VGPC Discord and met Kirkland, who had been quietly doing something similar with his collection, and collaborated with him and others on establishing scanning standards to use going forward, to have some level of consistent quality with those that were willing to do it, which eventually led to the https://scanning.guide/. In late 2018 the site was graciously redone by Gazimaluke and relaunched in the design you see now. We started branching out into actual articles written by our staff and releasing prototypes and unreleased games that we came across. The site continues doing this to this day, though we are branching out into more guest authors from the community posting interviews and articles as well in the near future.

g: As well as hosting my site, for which I am grateful. So, what is the day to day like for you?
H: Day to day on the scanning I try to get at least one magazine done daily. Doesn’t always happen but, in general, I debind a magazine the night before, then in the morning scan it in before leaving for work. If work gets slow I work on processing the scans, or else I’ll do it later that night and get them uploaded to the site and the Internet Archive.

g: Interesting. So how big do you think your archive is by this point?
H: Archive upload-wise I’m probably right around 2900 items if you count stuff that was removed lol. Then there’s a bunch on the site that wasn’t done to the higher scanning standards I go by now that’s not on the archive. So I’d guess in the 3000-4000 item range currently.

g: Do you know how big it is in terms of filesize?
H: Let me see real quick…
Looks like 2.5TB which is another reason I’m so thankful to have the Internet Archive to host my scans on due to the space and bandwidth that would be required otherwise.
The site alone usually has about half a TB of traffic per month so I can only imagine what it would be like if the magazine scans were also hosted directly on it.

g: Neat. Is there anything interesting that you got to be a part of due to GA that you would like to share?
H: Biggest thing is probably working with The Video Game History Foundation on scanning their extensive magazine collection so digital copies can be provided along with physical copies at their library. Being able to leverage the Internet Archive so people all over the world can easily access the magazines I’ve scanned that they might not have been able to easily otherwise is a great feeling personally for me. So many of these things are quite difficult to acquire and expensive as time goes on so having them as an ally in the preservation world is a godsend. There’s been lots of other connections and other projects I’ve worked on as well but I won’t ramble forever on that. Not only is Gaming Alexandria a tight community that likes to help each other out but there’s plenty of other preservation groups like VGHF, TCRF, and Hidden Palace just to name a few and we all get along great and try to push preservation forward together.
There’s so much work that needs to be done that we need all the help we can get and we need to support each other any way we can I think.

g: True that. Last question for now: anything that you would recommend to a would-be archivist?
H: I think it’s a good idea to preserve what interests you, which seems to go without saying, but I mean it more from a sense of not only going after what is popular. While you might not get much fanfare initially for the more obscure stuff it’s likely you’ll be the only one doing it and it’s important it’s being done. If you do good work for long enough it will get noticed, and to make good work easier it’s best to go with what you’re passionate about. The other thing I would suggest is not beating yourself up or comparing your output to others. Do what you can when you want to, this is a hobby after all. If you make yourself miserable trying to do something your output will naturally suffer or you might even burn out and stop altogether. Like I said before, we need all the help we can get, so try to avoid that if at all possible.

g: Thank you for being here, overlord Hubz. It’s been good talking to you.
H: No problem! Thanks for the interview. 🙂

– glmdgrielson, being a very good minion interviewer

Stuck in the Desert, or Video Strike Team
https://datahorde.org/stuck-in-the-desert-or-video-strike-team/
Mon, 28 Feb 2022 17:22:35 +0000

This is an interview with Sokar, of the Video Strike Team, conducted over IRC. The VST is an archival group of a rather small scope: preserving a particular stream, Desert Bus For Hope.

Desert Bus For Hope is a yearly charity stream, running under the premise that the more money that is received, the longer the stream goes on, and the more the organizers have to play the dullest video game imaginable. So dull, in fact, that Desert Bus has never been officially released. This year’s fundraiser gave us a stream that is exactly an hour under one week: 6 days and 23 hours! So this was a very long stream with a lot of data to preserve. So follows the story of how that happens.

Note: DBx refers to the iteration of Desert Bus for Hope. For example, this year, 2021, was DB15. Also, I have only minimally modified our interview, by adding in links where applicable and making minor spelling corrections. 

glmdgrielson: So first off, outside of the VST, what are you up to?

Sokar: I do video editing and Linux server security / software support, and various other (computer related) consulting things for “real work”.

g: So you started off with just the poster for DB6, according to the site, correct? How did that work?

S: We didn’t actually start doing the interactive postermaps till DB8, then I worked backwards to do all the previous ones (still not done).
The VST itself started formally during DB6.

g: That’s when Graham contacted MasterGunner, who presumably contacted you, correct?

S: Tracking the run live in some way was a confluence of ideas between me, Lady, and other members of the chat at the time. Graham knew how to get ahold of Gunner about making live edits because he was one of the people who helped with the DB5 torrent.
I honestly don’t remember how most of the DB6 VST crew was put together, it was very last minute.

g: Do you know anything about how that torrent was made?

S: The first DB5 torrent?

g: Yes.

S: Kroze (one of the chat mods) was physically at DB5 and brought a blank external HDD with him specifically for recording the entire stream, then after the run Fugi and dave_random worked together to create the torrent (with all the files split into 15min chunks). I wanna say the torrent file was initially distributed via Fugi’s server.
DB5 was the first time the entire run was successfully recorded.
LRR had previously toyed with the idea (DB3, but ended up doing clips instead) and steamcastle attempted to record all of DB4 but was unsuccessful.

g: And DB6 was the first year the VST existed. What was that first year like?

S: The first year was VERY short-handed; we only had 14 people, and a LOT of the “night” shifts were either just me by myself or me and BillTheCat.
We really didn’t know what we were doing, the first rendition of the DB6 sheet didn’t even have end times for events.
There was just “Start Time” “Event Type” “Description” and “Video Link”.
At some point we (the VST) will just re-spreadsheet the entire run, because we were so short-handed we missed a lot of things. When I went back to make the DB6 postermap I think I ended up uploading ~17(ish) new videos, because that was how many poster events weren’t even on the sheet.

g: What sort of equipment or software did you use back then?

S: We used Google Sheets (and still do, but not in the same way anymore), and then all the “editing” was done via Twitch’s Highlight system at the time, which had a checkbox to auto-upload the video to YouTube.
Then there were a few people with YouTube access that could enable monetization and other things like that.
Twitch’s Highlight editor (especially at the time we used it (DB6/DB7)) was extremely painful to use on very long VODs; there was no “seek by time”. You had to use the slider and kinda position it where you wanted and then just wait and be quick on the cut button.
We didn’t actually start capturing the run ourselves until Twitch’s overzealous VOD muting happened (2014-08-06) and we had to figure out a new way of doing things.

g: And just two years down the line, you had to start making your own tools. What was that like?

S: When that happened we had roughly 3 months to figure out what to do. dave_random put in a ton of time figuring out how to capture the run (using livestreamer, which has since been forked to streamlink). The way it worked during DB8 was that the video would get uploaded to YouTube with a couple of minutes on either side, then the video editors would go in and edit the video using YouTube’s editor.
Then we found out that there is a limit tied to YouTube’s editor: you can only have a set number of videos “editing” at once, then you get locked out of the editor for a while. We (the VST and Desert Bus in general) always end up being an edge case.
MasterGunner wrote the first version of our own editor so we could edit the video before it got sent to YouTube.
The VST website itself also didn’t exist till DB9; a lot of the poster revisions archive only exists because J and myself kept copies of all the revisions.

g: After DB9 is when you started trying to backup the previous years, right?

S: Yea, so (internally) the VST had talked about archival problems over the years, and when Anubis169 went to DB9 (in person) to volunteer, he also went with the express purpose to grab as many of the Desert Bus files as he could find at the time.
When he got back home, he and I went over the files he managed to get and he sent me a copy of everything he grabbed. I also spent the time trying to figure out how uStream had stored all the DB1 and DB2 clips, then downloaded a copy of all of them.
It turned out to be a very good time to do that, since a few years later IBM bought uStream and deleted all the archives.

g: So that looks to be all of the history questions I have. Now for the fun part: describe the process of archiving a Bus.

S: As in as it currently stands?
As in “how did this year work”?

g: Yes. How would the process of archival go as it currently stands?

S: well, that’s a hard one, haha

g: Not surprised, given the scope of the event we’re dealing with.

S: For old stuff: I already (also) flew to Victoria to get the missing DB3 and DB4 files, which was successful; the next time I go it will be to recover old prize data (I’m in the process of making a full prize archive).
For what we “regularly” capture, setting up for a new run goes pretty much like this:
The current version of the wubloader (our capture architecture, re-written by ekimekim and chrusher after DB12) is used by ekim all year, so he regularly works on it and fixes it to work around anything Twitch changes.
~3 months before the run we will put out the signup form to the internal VST place; a week or so after that it goes to the IRC channel and the LRR Discord (in the desertbus channel).
During about 2 of those 3 months I’ll finish up any new stuff for the VST website I’m working on, so it’s ready for the run.
The VST Org. Committee has meetings during the year to talk about any changes we want to make to any of the internal tools or our external-facing stuff; the first of these usually happens in June for a new run.
Sorry, some of this is out of order.

g: You’re fine.

S: If we need to inform regular VST members of some major changes we’ve made, we schedule meetings over some form of video chat for them to sign up for, and then do a quick check over everything new so we can get any questions answered and have everyone on the same page (usually about 30min per session).
New people will get a separate training session that’s usually about 90-120 min in length, new people will always start off as “spreadsheeters”, we don’t rotate in new editors until they’ve been around for a couple years and they kind of have a feel for what we do.
For setting up the VST website for the run, there’s a separate “front page” for when the run is live; also, the head node is dropped back to being non-public and we stand up an 8-node, globally located DNS cluster to handle the load. It runs on a 5-minute update cycle, because late-run, when there is a new poster revision, a full update and sync takes about 3 & 1/2 minutes.
For setting up a “new year” on the VST site, there’s an amount of manual work, but it’s only about 3 hours or so; it really depends on how many of the other things we track are set up at that point.

g: Other stuff being things like the charts, the clock, chat stats?

S: The clock is pretty easy. The chat stats require the chat capture be enabled and going, and the graphs require that the donation capture is going already, so that can’t be set up till donations reset. The gamejam page can’t be set up till Famout gets the gamejam on itch.io set up, and the gameshows page can’t be set up till Noy2222 actually knows what gameshows he’s doing this year. The spreadsheet page can’t be set up until all the Google Docs spreadsheets are set up. The posters page requires that Lunsford has the poster that they’re drawing set up somewhere for us to query. And the animated poster evolution page requires 3 poster revisions before it works at all. The postermap page is updated manually when I have time to draw/trace and then import the new postermap (ImageMap) of the poster Lunsford has drawn (still not done with this year’s yet).
For standing up our capture infrastructure: there’s at minimum 2 nodes on “hardware” (as in non-virtualized) that are “editing” nodes, only one of which actually uploads to the YouTube channel; after that, (usually) all the other nodes are virtualized and (this year) were provided by 6 different people. These are completely separate from the VST website nodes.
We also always try to make sure all the capture nodes are geographically distributed so a random network outage can’t hurt us, and so if one node misses a segment the other 7 can fill in the blank.
Once all of those are stood up and working, they’re all imported into the monitoring dashboard so we know if one of them has a problem. Usually we have all the capture (and website) hardware stood up about 1 week before the run starts. Then we have time to test it and ekimekim and chrusher (Wubloader), ElementalAlchemist (who coded the new version of thrimbletrimmer, our editor), and myself (website) have time to fix any bugs / finish any new features. At that point all the approved (new and old) VST members will also get an invite to the private sheet. Also, we invite any new VST members to the private chat space we use during the run (self-hosted Zulip).

We also spend a lot of time working on the schedule (as part of the signup form people tell us their available hours); people are limited to a max of 6-hour shifts, so scheduling ~60 people over a week where we try to maintain ~8 active people on the private spreadsheet is actually quite complex. ekimekim created a Python script to create an initial rough guess; we then have a VST Org meeting to smooth things out. The resulting (schedule) spreadsheet is then given to everyone on the VST so they can check for errors in their personal schedule, and then (for during the run) the schedule’s CSV is fed into a Zulip bot that announces who’s going on/off shift. Also, once I have the VST website nodes set up I give J access to one (geographically) near him, that he also uses for his own capture of the chat, Twitch, and poster revisions; that way if the VST website head-node misses something we have a backup copy with the stuff J sets up as well.
I think that’s it; everything I’m thinking of now is post-run stuff. Oh, J also runs a capture of all of the Prize data that we preserve for the (upcoming) prize archive.
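Editor’s note: the scheduling script itself isn’t shown in the interview and I haven’t seen it. As a purely illustrative sketch of what a greedy first pass over availability data might look like, here is some Python; the volunteer names, hours, and data layout are all made up, and the real script may work quite differently.

```python
from collections import defaultdict

# Hypothetical input: each volunteer's available hours over the run,
# as a set of hour indices. Names and numbers are made up for illustration.
availability = {
    "vol_a": set(range(0, 48)),
    "vol_b": set(range(24, 96)),
    "vol_c": set(range(0, 168)),
}

RUN_HOURS = 168        # a roughly week-long run
TARGET_PER_HOUR = 8    # aim for ~8 active spreadsheeters at a time
MAX_SHIFT = 6          # nobody is scheduled for more than 6 hours in a row

def rough_schedule(availability):
    """Greedy first pass: walk the run hour by hour and start a shift for
    whoever is available and currently has the fewest scheduled hours."""
    assigned = defaultdict(list)   # name -> list of (start_hour, end_hour)
    hours_worked = defaultdict(int)
    coverage = defaultdict(int)    # hour -> number of people on shift
    busy_until = defaultdict(int)  # name -> hour their current shift ends

    for hour in range(RUN_HOURS):
        while coverage[hour] < TARGET_PER_HOUR:
            candidates = [
                n for n, avail in availability.items()
                if hour in avail and busy_until[n] <= hour
            ]
            if not candidates:
                break  # a gap left for the humans to fix in the org meeting
            pick = min(candidates, key=lambda n: hours_worked[n])
            length = 1
            while (length < MAX_SHIFT
                   and hour + length < RUN_HOURS
                   and hour + length in availability[pick]):
                length += 1
            assigned[pick].append((hour, hour + length))
            busy_until[pick] = hour + length
            hours_worked[pick] += length
            for h in range(hour, hour + length):
                coverage[h] += 1
    return assigned

if __name__ == "__main__":
    for name, shifts in rough_schedule(availability).items():
        print(name, shifts)
```

As in the real process Sokar describes, a pass like this only produces a rough guess; the gaps and fairness problems still have to be smoothed out by people afterwards.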

g: Well, that’s one heck of a process. Mind going into the tech used, like Wubloader and thrimbletrimmer?

S: Sure, wubloader is an ekimekim/chrusher-coded Python 3 project that is a custom HLS capture (as in we capture every 2-second-long .ts segment Twitch sends out when the stream is going). It uses PostgreSQL for the backend databases, nginx for the web side, FFMPEG for doing the actual video editing, and Docker for easier node deployment. It uses the Google Docs API for interaction with the private sheet and the YouTube API for uploading to YouTube / managing the playlists.
Thrimbletrimmer (now coded by ElementalAlchemist) uses HLS.js and a bunch of custom JavaScript and HTML for the editing interface; it can make multiple cuts (so we can cut the middle out of a video) and has the ability to add chapter markers to the description if we want to do that on a longer video.
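Editor’s note: the real wubloader is far more involved than this, but as a rough illustration of the HLS-capture idea Sokar describes — polling a media playlist and saving each short .ts segment exactly once — here is a minimal, hypothetical Python sketch using the third-party requests and m3u8 libraries. The playlist URL is a placeholder; the real system resolves it from Twitch, coordinates across nodes, and keeps far more metadata.

```python
import os
import time

import requests   # third-party: pip install requests
import m3u8       # third-party: pip install m3u8

# Placeholder URL: in reality the media playlist has to be resolved from
# Twitch's APIs for the channel; it is hard-coded here purely for illustration.
MEDIA_PLAYLIST_URL = "https://example.invalid/desertbus/chunked/index.m3u8"

def capture(playlist_url: str, out_dir: str = "segments") -> None:
    """Poll the HLS media playlist and save each ~2-second .ts segment once."""
    os.makedirs(out_dir, exist_ok=True)
    seen = set()
    while True:
        playlist = m3u8.load(playlist_url)   # fetch and parse the playlist
        for seg in playlist.segments:
            if seg.uri in seen:
                continue                     # already saved this segment
            seen.add(seg.uri)
            data = requests.get(seg.absolute_uri, timeout=10).content
            name = seg.uri.rsplit("/", 1)[-1]
            with open(os.path.join(out_dir, name), "wb") as f:
                f.write(data)
        # Wait roughly one segment duration before polling for new segments.
        time.sleep(playlist.target_duration or 2)

if __name__ == "__main__":
    capture(MEDIA_PLAYLIST_URL)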

g: So the upload process is done by Thrimbletrimmer?

S: When someone makes an edit in Thrimbletrimmer, it talks to thrimshim, which then passes the actual edits on to the wubloader, which then does the edit and uploads the video to YouTube.
thrimshim is a piece of the wubloader that is kind of like an API to all the data in the wubloader.
So when a video is marked in the private sheet for upload, there is a link to Thrimbletrimmer that has a UUID on it, which Thrimbletrimmer passes to thrimshim so it knows which video segments correspond to the requested video. On the way back it’s like “edit this UUID with the following edits, here’s the video title and description”.
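Editor’s note: thrimshim’s actual interface isn’t spelled out in the interview, so the host, path, and field names in the sketch below are illustrative guesses at the flow Sokar describes (the editor submits cut points against a row’s UUID, and the backend does the cut and the YouTube upload), not the real API.

```python
import requests  # third-party: pip install requests

# Hypothetical host, path, and field names, purely for illustration.
THRIMSHIM = "https://thrimshim.example.invalid"

def submit_edit(video_id, ranges, title, description):
    """Ask the backend to cut the given stream-time ranges for this row's
    UUID and upload the result to YouTube."""
    payload = {
        "id": video_id,            # UUID taken from the private spreadsheet row
        "video_ranges": ranges,    # list of [start, end] stream timestamps to keep
        "video_title": title,
        "video_description": description,
        "state": "EDITED",         # hand the row back for cutting and upload
    }
    resp = requests.post(f"{THRIMSHIM}/edit/{video_id}", json=payload, timeout=30)
    resp.raise_for_status()
    return resp

# Example call with made-up values:
# submit_edit("123e4567-e89b-12d3-a456-426614174000",
#             [["2021-11-13T10:00:00Z", "2021-11-13T10:05:00Z"]],
#             "DB15 - Example Clip", "An example description.")
```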

g: So what about the Twitch chat? How do you grab that?

S: Twitch chat is captured in 2 ways: via irssi (a Unix command-line IRC client), which both J and myself run a capture with, and (this year) via a capture ekimekim coded up that also grabs all the metadata for each chat message.
So before the run starts, J and I just set up our irssi sessions on 2 respective servers and leave them running in screen. ekimekim runs his custom capture off 2 of the wubloader nodes.
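Editor’s note: for readers curious what a bare-bones chat capture can look like, the sketch below logs a Twitch channel’s chat anonymously over IRC using only Python’s standard library. It is not the irssi or wubloader setup described above; the channel name is a placeholder, and a real capture would also want timestamps, reconnection handling, and the message metadata Sokar mentions.

```python
import socket

# Anonymous, read-only Twitch chat capture over IRC. The channel name is a
# placeholder; an authenticated capture would send "PASS oauth:<token>" first.
HOST, PORT = "irc.chat.twitch.tv", 6667
CHANNEL = "#desertbus"

def capture_chat(logfile: str = "chat.log") -> None:
    sock = socket.create_connection((HOST, PORT))
    sock.sendall(b"NICK justinfan12345\r\n")      # anonymous "justinfan" nick
    sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
    buf = b""
    with open(logfile, "a", encoding="utf-8") as log:
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break                             # server closed the connection
            buf += chunk
            while b"\r\n" in buf:
                line, buf = buf.split(b"\r\n", 1)
                text = line.decode("utf-8", errors="replace")
                if text.startswith("PING"):
                    # Answer keepalives or the server will drop us.
                    sock.sendall(text.replace("PING", "PONG", 1).encode() + b"\r\n")
                else:
                    log.write(text + "\n")
                    log.flush()

if __name__ == "__main__":
    capture_chat()
```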

g: So how has this setup evolved over time?

S: For chat capture or video capture?

g: Both.

S: Chat capture has largely been the same; old (pre-DB6) chat capture was just done with the IRC program of whoever made the capture (mIRC or IceChat).
Video capture has changed quite a bit. The first version of the wubloader (DB8) [coded by dave_random] was done with livestreamer (saved to mp4 files) and only did rough cuts. The 2nd version (DB9-12) came with Thrimbletrimmer (coded by MasterGunner), which did specific cuts but still used livestreamer as the capture source. During DB12 we discovered Twitch had implemented a “24-hour watch limit”, which caused both capture nodes to miss part of Ash & Alex’s driver intro. Starting with DB13 ekimekim and Chrusher implemented a custom home-grown capture method that attaches directly to the HLS stream and resets itself every so often to avoid the 24-hour watch limit.
The new capture method saves all the 2-second-long .ts files as they come out, and each node fills in for any other node that got a partial or missed segment; now the capture nodes are a cluster instead of independent.
The editing process has gone from using Twitch highlights -> using YouTube’s editor -> using a custom editor coded by MasterGunner -> using a further improved editor coded by ElementalAlchemist.
Compared to using Twitch’s or YouTube’s editor, the ones coded by MasterGunner and ElementalAlchemist are an amazing improvement, and much less buggy.
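Editor’s note: the cluster backfill described a few answers up — each node filling in segments another node missed or only partially got — can be pictured with a toy merge like the one below. The data layout is entirely made up for illustration; the real system tracks segments with much richer metadata.

```python
# Toy illustration only: each node reports which 2-second segments it holds,
# keyed by segment start time, with a flag for whether its copy is complete.

def merge_segments(node_listings):
    """node_listings: list of dicts mapping start time -> (is_full, path).
    Returns one combined map, preferring full segments over partial ones."""
    merged = {}
    for listing in node_listings:
        for start, (is_full, path) in listing.items():
            current = merged.get(start)
            if current is None or (is_full and not current[0]):
                merged[start] = (is_full, path)
    return merged

# Made-up example: node A missed 00:00:02 and only has a partial 00:00:04.
node_a = {"00:00:00": (True, "a/0.ts"), "00:00:04": (False, "a/2.ts")}
node_b = {"00:00:02": (True, "b/1.ts"), "00:00:04": (True, "b/2.ts")}
print(merge_segments([node_a, node_b]))
```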

g: Anything else you want to add? Advice for somebody considering a similar archival project? Other than “don’t”?

S: Honestly: “Start on the first year of the event”, “Ask us (the VST) for advice”, “Preserve everything, backtracking to get something you missed is always more painful”
“Don’t try to do it by yourself”
The VST only works because of all the people involved and learning from the mistakes we’ve made over the years.

g: Any closing thoughts before I wrap up this interview?

S: All of this would never have happened if LoadingReadyRun hadn’t put “First Annual” on the website banner back in 2007 as a joke.

g: Thank you for your time!

– glmdgrielson, along for the eight hour, mind-numbingly dull drive

Kill Screen: The Namco Catalog IP Rescue! Interview with No Context Pacman
https://datahorde.org/kill-screen-interview-with-no-context-pacman-on-the-namco-ip-catalog-rescue/
Sat, 18 Apr 2020 14:00:00 +0000

The Namco Catalog IP was a program launched by Bandai Namco which allowed students and developers to use its intellectual property in making their games. In the short 4 (or maybe 5) years this project was active, a lot of interesting games came out of it. Unfortunately, it didn’t turn out as popular as was anticipated and it slowly faded into obscurity.

A lot of these games had already gone offline. However, what few games remained online had a license that was due to expire last month, in March of 2020.

The Twitter account @nocontextpacman led a successful initiative to spread the word about these games and alert internet archiving communities to rescue whatever could be saved. Below is an interview we conducted with RyanSil, the man behind the Pac, in charge of running the account:

TheMadProgramer: Can you give us a brief history of the No Context Pac-Man account? I think it was originally started by Mr. McScrewup, and you’re the second owner, correct?

RyanSil: Correct. He started the thing in August 2018; as he was on the verge of stopping, I stepped in with an account of my own to continue the run. This was created in June 2019.

NCPM isn’t anything quite out of the ordinary of the Twitterverse. It’s just a funny little account for Pac-Man memes and media. Plenty of No Context accounts exist on Twitter themed around differing IPs and brands. Not that they’re affiliated with them.

TheMadProgramer: Yeah, I have seen quite a lot of them myself, I think you’ve referred a lot of people back to No Context Klonoa for trying to sneak Klonoa in as a submission, because of his hat?

RyanSil: Nah, I just thought I would bring up Klonoa.

TheMadProgramer: So how’d you find out about the Namco Catalog IP games, in the first place? I think you did a stream or two?

RyanSil: I’ve known of them for at least a couple of years. I think as I was looking through Pac-Man’s mobile catalog, I came across some unique ones that didn’t officially release in the US. I hadn’t really thought too much more about how they’ve come to be until I was deep into running NCPM.

TheMadProgramer: I noticed you held a level designing contest for that one game which allowed you to “mash levels together” on the Discord server at some point?

RyanSil: Yes. I’ve obtained a couple of Steam keys for Pac-Man: Championship Edition DX. And I thought it would be fun to give Pac-Man Ghost & Stage Maker a slight boost if I held a level-building contest for it.

TheMadProgramer: Did you get enough submissions to be able to call it a slight boost?

RyanSil: Eh, probably not. But it was worth a try.

TheMadProgramer: So let’s get to what really netted these games so much attention then… The expiration of their license. Now I’ve checked the website myself and the announcement was all the way back from either last year or January I think?¹ Why’d it take so long for people to notice that the deadline was so close?

RyanSil: Frankly, I didn’t even notice there was an expiration until someone that follows these things tweeted about it, and it found its way into my Twitter timeline. Catalog IP simply isn’t a popular project as far as customer attraction goes, because it was kept to a Japanese audience without much in the way of promotion.

TheMadProgramer: I did hear about efforts to bring it overseas, no luck there though huh?

RyanSil: I was looking at Pac-Man’s Nippon Journey’s Google Play page, and its downloads were labeled at 10+. That’s a small number of people that bought it. Free apps got some more exposure, but those installs were more in the thousands, which still isn’t huge when you consider the numbers IPs like Pac-Man can pull in.

Pac-Man’s Nippon Journey Promotional Artwork

Anyway, yea. Supposedly, there were lots of legal complications in translating Catalog IP to the US. Which is a great shame, because I would’ve loved to have played a role in it. It probably could’ve allowed for even more interesting games to be brought to the public by a variety of Western devs.

TheMadProgramer: Yeah certainly, seeing how many rip-offs there are as is.

RyanSil: Well, it’s less about making just another Pac-Man and more about taking the characters and putting them into a game that only you would have thought of creating.

There were a couple of games in the project that tried being Pac-Man, but they weren’t just Pac-Man. One of them had you experience it in VR. The other had mobile esports connections.

Gameplay for a Catalog IP game that was lost prior to the efforts this year: Pac-Tune
Gameplay by YouTube channel Appliv Games

The rest of the Pac-Man games threw the characters into fresh games altogether. One, for example, was a puzzler where you use Tetrominos to build as high a path for Pac-Man as you can without messing up or accidentally crushing him.

TheMadProgramer: PC, VR and mobile… they really weren’t restricted to any platform.

RyanSil: They weren’t, but PC and mobile seemed to have been the most they’ve covered.

TheMadProgramer: Anything interesting on console?

RyanSil: Assets of Namco IPs were released for SmileBasic on Nintendo 3DS – in Japan, of course. There were also Japanese RPG Maker MV characters and assets players could use, which were used in games like Nippon Journey. That’s pretty much it.

TheMadProgramer: So then, would you care to help me solve a mystery? I just checked your Tweet from March 24th, which currently has over 300 retweets.² In terms of spreading the word about these games, just how do you think you managed to accomplish what you couldn’t in months, in the span of a few short days?

RyanSil: Well, even the most popular tweets don’t change the scene much. So while I may have exposed more people to it, there’s always going to be way more that are unaware of the games at all. You know what they say – “You only get 15 minutes of fame”

TheMadProgramer: I suppose that has a bit to do with how Game Preservation and Game Journalism have become very “discrete” niches.

RyanSil: Right.

TheMadProgramer: That being said, is there anyone you’d like to give a shoutout to? I think Pacman’s Park grabbed a bunch of the Android games?

RyanSil: @PacmansPark did a swell job rounding them up. Ernesto Aguirre also had a great hand in it by providing multiple APKs for plenty of games as well as the iOS stuff. I also gotta thank the Flashpoint guys for bringing the HTML5 stuff from the Yahoo games portal to the program and making the region-locked games playable regardless of location.

TheMadProgramer: It’s fortunate people were able to move so quickly, even though quite a few of the games have survived long beyond what was feared to be the deadline.

Out of curiosity did you get any feedback from developers who had previously worked on these games? Granted I’d think language barriers would make that a bit difficult.

RyanSil: Pac-Man Ghost & Stage Maker managed to survive, surprisingly enough. As did four HTML5 games on the Yahoo portal. However, the rest has vanished. At least from official stores. Of course you can find at least some of them on third party sites and the like. But the archives are great since they rope em all in one spot.

No developers have contacted me about it. I’m not something that would quickly pop up on their radar.

TheMadProgramer: Well then that’s that. Any future projects in the works? I hear you’re also co-hosting an art collab?

RyanSil: That’s Jackie’s (@x_khou) doing. She’s been handling it since last year. My server for NCPM merely became a new host for it. There isn’t too much I have in mind to discuss here, but I did animate a few seconds of the Don’t Hug Me I’m Scared Re-Animated collab, and I’m continuing to curate Flash and Shockwave games for Flashpoint.

TheMadProgramer: Ok, then. Thank you very much for taking your time.

RyanSil: You’re more than welcome

TheMadProgramer: Best of luck with everything…

Notes:

¹ As it turns out the announcement I saw was for the removal of content on Niconicommons the year before (same time last year, 31st of March 2019) http://blog.niconicommons.jp/2019/01/post-169.html
² At the time of conducting this interview (14th April 2020).
