In Which I Tell You It’s A Good Idea To Support a Magazine-Scanning Patreon

So, Mark Trade and I have never talked, not once.

All I know about Mark is that due to his efforts, over 200 scans of magazines are up on the Archive.


These are very good scans, too. The kind of scans that a person looking to find a long-lost article, verify a hard-to-pin-down fact, or pass a great image along to others would kill to have. 600 dots per inch, excellent contrast and clarity, and the margins cut just right.

[Scan: CD-ROM Today #5, April/May 1994, page 36]

So, I could fill this entry with all the nice covers, but covers are kind of easy, to be frank. You put them face down on the scanner, you do a nice big image, and then touch it up a tad. The cover paper and the printing are always super-quality compared to the rest, so it’ll look good:

[Scan: CD-ROM Today #5, April/May 1994, cover]

But the INSIDE stuff… that’s so much harder. Magazines were often bound in a way that put the images RIGHT against the binding, not every magazine allowed for proper spacing, and all of it is very hard to shove into a scanner without losing some information. I have a lot of well-meaning scans in my life with a lot of information missing.

But these… these are primo.

[Scans: PC Games #1, Fall 1988, pages 11, 12, and 73]

When I stumbled on the Patreon, he had three patrons giving him $10 a month. I’d like it to be $500, or $1000. I want this to be his full-time job.

Reading the Patreon page’s description of his process shows he’s taking it quite seriously: steaming glue, removing staples. I’ve gone on record about the pros and cons of destructive scanning, but game magazines are not rare, just vastly underrepresented in scanned items compared to how many people have these things in their past.

I read something like this:

It is extremely unlikely that I will profit from your pledge any time soon. My scanner alone was over $4,000 and the scanning software was $600. Because I’m working with a high volume of high resolution 600 DPI images I purchased several hard drives including a CalDigit T4 20TB RAID array for $2,000. I have also spent several thousand dollars on the magazines themselves, which become more expensive as they become rarer. This is in addition to the cost of my computer, monitor, and other things which go into the creation of these scans. It may sound like I’m rich but really I’m just motivated, working two jobs and pursuing large projects.

…and all I think about is, this guy is doing so much amazing work that so many thousands could be benefiting from, and they should throw a few bucks at him for his time.

My work consists of carefully removing individual pages from magazines with a heat gun or staple-remover so that the entire page may be scanned. Occasionally I will use a stack paper cutter where appropriate and will not involve loss of page content. I will then scan the pages in my large format ADF scanner into 600 DPI uncompressed TIFFs. From there I either upload 300 DPI JPEGs for others to edit and release on various sites or I will edit them myself and store the 600 DPI versions in backup hard disks. I also take photos of magazines still factory-sealed to document their newsstand appearance. I also rip full ISOs of magazine coverdiscs and make scans of coverdisc sleeves on a color-corrected flatbed scanner and upload those to archive.org as well.

This is the sort of thing I can really get behind.

The Internet Archive is scanning stuff, to be sure, but the focus is on books. Magazines are much, much harder to scan – the book scanners in use are just not well suited to something bound the way magazines are. The work that Mark is doing is work that very few others are doing, and having canonical scans of the advertisements, writing and materials from the magazines that used to populate the shelves is vital.

Some time ago, I gave my whole collection of donated game-related magazines to the Museum of Art and Digital Entertainment, because I recognized I wouldn’t be scanning them anytime soon, and how difficult the job was going to be. It would take real, major labor I couldn’t personally give.

Well, here it is. He’s been at it for a year. I’d like to see that monthly number jump to $100/month, $500/month, or more. People dropping $5/month towards this Patreon would be doing a lot for this particular body of knowledge.

Please consider doing it.

Thanks.

Source: http://ascii.textfiles.com/archives/5097

Permanent link to this article: https://www.internetking.us/wordpress/2016/11/20/in-which-i-tell-you-its-a-good-idea-to-support-a-magazine-scanning-patreon/

A Simple Explanation: VLC.js

The previous entry got the attention it needed, and the maintainers of the VLC project connected with both Emularity developers and Emscripten developers, and the process has begun.

The best example of where we are is this screenshot:

[Screenshot: VLC.js running in the browser]

The upshot of this is that a Javascript-compiled version of the VLC player now runs, spits out a bunch of status and command-line information, and then gets cranky that it has no video/audio device to use.

With the Emularity project, this was something like 2-3 months into the project. In this case, it happened in 3 days.

The reasons it took such a short time were multi-fold. First, the VLC maintainers jumped right into it at full bore. They’ve had to architect VLC for a variety of wide-ranging platforms including OSX, Windows, Android, and even weirdos like OS/2; to them, something aimed at “the web” is just another place to go. (They’d also made a few web plugins in the past.) Second, the developers of Emularity and Emscripten were right there to answer the tough questions and smooth the weird little bumps and switchbacks.

Finally, everybody has been super-energetic about it – diving into the idea, without getting hung up on factors or features or what may emerge; the same flexibility that coding gives the world means that the final item will be something that can be refined and improved.

So that’s great news. But after the initial request went into a lot of screens, a wave of demands and questions came along, and I thought I’d answer some of them to the best of my abilities, and also make some observations as well.


When you suggest something somewhat crazy, especially in the programming or development world, the responses vary widely. And if you end up on Hacker News, Reddit, or a number of other high-traffic locations, those reactions fall into some very predictable areas:

  • This would be great if it happens
  • This is fundamentally terrible, let me talk about why for 4 paragraphs
  • You are talking about making a sword. I am a swordmaker. I have many opinions.
  • My sister was attacked by a C library and I’m going to tell a very long story
  • Oh man, Jason Scott, this guy

So, quickly on some of these:

  • It’s understandable some people will want to throw the whole idea under the bus because the idea of the Web Browser playing a major part in transactions is a theoretical hellscape compared to an ideal infrastructure, but that’s what we have and here we go.
  • I know that it sounds like porting things to Javascript is crazy. I find that people think we’re rewriting things from scratch, instead of using Emscripten, which compiles out to Javascript as a target (and later WebAssembly). We do not write from scratch. (There’s a short sketch of what that build step looks like right after this list.)
  • Browsers do some of this heavy lifting. It depends on the browser, on the platform, on the day, and they do not talk to each other. If there were a way to include a framework to tell a browser what to do with ‘stuff’, and it then brought both the stuff and the instructions in and did the work, great. Yes, there are plenty of cases of stuff/instructions (Webpage/HTML, Audio/MP3) that browsers take in, but it’s different everywhere.
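To make the “we compile, we don’t rewrite” point a little more concrete, here’s a minimal sketch of the kind of build step Emscripten performs – assuming the Emscripten SDK is installed and emcc is on your PATH. The file names and flags are just illustrations, not the VLC team’s actual build setup:

```python
# A minimal sketch of driving an Emscripten build from Python.
# Assumes the Emscripten SDK is installed and `emcc` is on PATH.
# The source file and output names are hypothetical examples.
import subprocess

def build_to_js(source_file: str, output_js: str) -> None:
    """Compile a C source file to JavaScript + WebAssembly with emcc."""
    subprocess.run(
        [
            "emcc",
            source_file,
            "-O2",            # optimize the generated code
            "-s", "WASM=1",   # emit a .wasm module alongside the .js loader
            "-o", output_js,  # emcc infers the .wasm name from this
        ],
        check=True,
    )

if __name__ == "__main__":
    # e.g. a stand-in "hello.c", not VLC itself
    build_to_js("hello.c", "hello.js")
```

The point is simply that existing C/C++ code goes in, and a .js loader plus a .wasm module come out; nothing gets rewritten by hand.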

But let’s shift over to why I think this is important, and why I chose VLC to interact with.

First, VLC is one of those things that people love, or people wish there was something better than, but VLC is what we have. It’s flexible, it’s been well-maintained, and it has been singularly focused. For a very long time, the goal of the project has been aimed at turning both static files AND streams into something you can see on your machine. And the machine you can see it on is pretty much every machine capable of making audio and video work.

Fundamentally, VLC is a bucket: drop a very wide variety of sound-oriented or visual-oriented files and containers into it, and it will do something with them. DVD ISO files become playable DVDs, including all the features of said DVDs. VCDs become craptastic but playable videos. MP3, FLAC, MIDI – all of them fall into VLC and start becoming scrubbing-ready sound experiences. There are quibbles here and there about accuracy of reproduction (especially with older MOD-like formats like S3M or .XM), but these are code, and fixable in code. That VLC doesn’t immediately barf on the rug with the amount of crapola that can be thrown at it is enormous.

And completing this thought: by choosing something like VLC, open source from top to bottom and universal in its approach, the “closing of the loop” of VLC being instantly available in all browsers will ideally cause people to find the time to improve it and add formats that otherwise wouldn’t get such advocacy. Images inside Apple II floppy disk images? Oscilloscope captures? Morse code evaluation? Slow-scan television? If those items have a future, it’s probably in VLC, and it’s much more likely if the web gets a VLC that just appears in the browser, no fuss or muss.


Fundamentally, I think my personal motivations are pretty transparent and clear. I help oversee a petabytes-big pile of data at the Internet Archive. A lot of it is very accessible; even more of it is not, or has to have clever “derivations” pulled out of it for access. You can listen to .FLACs that have been uploaded, for example, because we derive (noted) mp3 versions that travel over the web more easily. Same for the MPG files that become .mp4s, and so on, and so on. A VLC that (optionally) can play off the originals, or which can access formats that currently sit as huge lumps in our archives, will be a fundamental world-changer.
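To give a concrete (if simplified) idea of what a “derivation” is, here’s a minimal sketch using the ffmpeg command-line tool – one common way to produce web-friendly copies of originals. The file names are placeholders, and this is not the Archive’s actual derivation pipeline:

```python
# A minimal sketch of deriving web-friendly copies from archival originals.
# Assumes ffmpeg is installed and on PATH; file names are placeholders,
# and this is not the Internet Archive's actual derivation pipeline.
import subprocess

def derive_mp3(flac_path: str, mp3_path: str) -> None:
    """Transcode a FLAC original into a high-quality VBR MP3."""
    subprocess.run(
        ["ffmpeg", "-i", flac_path,
         "-codec:a", "libmp3lame", "-qscale:a", "2",  # ~190 kbps VBR
         mp3_path],
        check=True,
    )

def derive_mp4(mpg_path: str, mp4_path: str) -> None:
    """Transcode an MPEG original into an H.264/AAC MP4 browsers can stream."""
    subprocess.run(
        ["ffmpeg", "-i", mpg_path,
         "-c:v", "libx264", "-crf", "23", "-c:a", "aac",
         mp4_path],
        check=True,
    )

if __name__ == "__main__":
    derive_mp3("original.flac", "derived.mp3")
    derive_mp4("original.mpg", "derived.mp4")
```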

Imagine playing DVDs right there, in the browser. Or really old computer formats. Or doing a bunch of simple operations to incoming video and audio to improve it without having to make a pile of slight variations of the originals to stream. VLC.js will do this and do it very well. The millions of files that are currently without any status in the archive will join the millions that do have easy playability. Old or obscure ideas will rejoin the conversation. Forgotten aspects will return. And VLC itself, faced with such a large test sample, will get better at replaying these items in the process.

This is why this is being done. This is why I believe in it so strongly.


I don’t know what roadblocks or technical decisions the team has ahead of it, but they’re working very hard at it, and some sort of prototype seems imminent. The world with this happening will change slightly when it starts working. But as it refines, and as these secondary aspects begin, it will change even more. VLC will change. Maybe even browsers will change.

Access drives preservation. And that’s what’s driving this.

See you on the noisy and image-filled other side.

Source: http://ascii.textfiles.com/archives/5089

Permanent link to this article: https://www.internetking.us/wordpress/2016/11/17/a-simple-explanation-vlc-js/

A Simple Request: VLC.js

Almost five years ago to the day, I made a simple proposal to the world: Port MAME/MESS to Javascript.

That happened.

I mean, it cost a dozen people hundreds of hours of their lives… and there were tears, rage, crisis, drama, and broken hearts and feelings… but it did happen, and the elation and the world we live in now are quite amazing, with instantaneous emulated programs in the browser. And it’s gotten boring for people who know about it, except for the people who haven’t heard about it until now.

By the way: work continues earnestly on what was called JSMESS and is now called The Emularity. We’re doing experiments with putting it in WebAssembly and refining a bunch of UI concerns and generally making it better, faster, cooler with each iteration. Get involved – come to #jsmess on EFNet or contact me with questions.

In celebration of the five years, I’d like to suggest a new project – one of several candidates I’ve weighed, but the one I think has the best ratio of effort to absolute game-changing potential in the world.


Hey, come back!

It is my belief that a Javascript (later WebAssembly) port of VLC, the VideoLan Player, will fundamentally change our relationship to a mass of materials and files out there, ones which are played, viewed, or accessed. Just like we had a lot of software locked away in static formats that required extensive steps to even view or understand, so too do we have formats beyond the “usual” that are also frozen into a multi-step process. Making these instantaneously function in the browser, all browsers, would be a revolution.

A quick glance at the features list of VLC shows how many variant formats it handles, from audio and sound files through to encapsulations like DVDs and VCDs. Files that now rest as hunks of ISOs and .ZIP files could be turned into living, participatory parts of the online conversation. Also, formats like .MOD and .XM (trust me) would live again, effectively.

Also, VLC has weathered years and years of existence, and this additional use case would help people contribute to it, much like there have been improvements in MAME/MESS over time as folks who normally didn’t dip in there added suggestions or feedback to make the project better in pretty obscure realms.

I firmly believe that this project, fundamentally, would change the relationship of audio/video to the web. 

I’ll write more about this in coming months, I’m sure, but if you’re interested, stop by #vlcjs on EFnet, or ping me on twitter at @textfiles, or write to me at vlcjs@textfiles.com with your thoughts and feedback.

See you.

 

Source: http://ascii.textfiles.com/archives/5084

Permanent link to this article: https://www.internetking.us/wordpress/2016/11/01/a-simple-request-vlc-js/

The Festival Floppies

In 2009, Josh Miller was walking through the Timonium Hamboree and Computer Festival in Baltimore, Maryland. Among the booths of equipment, sales, and demonstrations, he found a vendor selling an old collection of 3.5″ floppy disks for DOS and Windows. He bought it, and kept it.

A few years later, he asked me if I wanted them, and I said sure, and he mailed them to me. They fell into the everlasting Project Pile, and waited for my focus and attention.

They looked like this:

[Photo: the box of 3.5″ floppy disks]

I was particularly interested in the floppies that appeared to be someone’s compilation of DOS and Windows programs in the most straightforward form possible – custom laser-printed directories on the labels, and no obvious theme as to why this shareware existed on them. They looked like this, separated out:

[Photo: the shareware floppies, separated out]

There were other floppies in the collection, as well:

[Photo: other floppies from the collection]

They’d sat around for a few years while I worked on other things, but the time finally came this week to spend some effort to extract data.

There are debates on how to do this that are both boring and infuriating, and I’ve ended friendships over them, so let me just say that I used a USB 3.5″ floppy drive (still available for cheap on Amazon; please take advantage of that) and a program called WinImage that will pull a disk image, in the form of a .ima file, off the floppy drive. Yes, I could do flux imaging of these disks, but sorry, that’s incredibly insane overkill. These disks contain files put on there by a person, and we want those files, along with the accurate creation dates, the filenames and the contents. WinImage does it.

Sometimes the floppies have errors and require repeated tries to get the data off them. Sometimes it takes a LOT of tries. If, after a mass of tries, I am unable to do a full disk scan into a disk image, I try just mounting it as A: in Windows and pulling the files off – the files are sometimes just fine even though other parts of the disk are dead. In that case I make a .ZIP file instead of a .IMA file. This is not preferred, but the data gets off the disk in some form.
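For the fallback case, here’s a minimal sketch of “pull the files off and ZIP them”, assuming the half-dead floppy is mounted as A: on Windows. It’s an illustration of the idea, not the exact tooling I use, and it only preserves whatever timestamps the filesystem still reports:

```python
# A minimal sketch of the fallback: copy readable files off a mounted floppy
# (assumed to be A:\ on Windows) into a .ZIP, keeping names and modification
# times. Illustrative only, not the exact process described above.
import os
import time
import zipfile

def zip_floppy(mount_point: str = "A:\\", out_zip: str = "floppy_files.zip") -> None:
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(mount_point):
            for name in files:
                path = os.path.join(root, name)
                try:
                    with open(path, "rb") as fh:
                        data = fh.read()
                except OSError:
                    # Unreadable file (bad sectors): skip it, keep going.
                    continue
                # Store the file under its path relative to the floppy root,
                # with the original modification time.
                arcname = os.path.relpath(path, mount_point)
                info = zipfile.ZipInfo(
                    arcname,
                    date_time=time.localtime(os.path.getmtime(path))[:6],
                )
                info.compress_type = zipfile.ZIP_DEFLATED
                zf.writestr(info, data)

if __name__ == "__main__":
    zip_floppy()
```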

Some of them (just a handful) were not even up for this – they’re sitting in a small plastic bin and I’ll try some other methods in the future. The ratio of imaged to ZIPped to dead was very good, something like 40-3-3.

I dumped most of the imaged files (along with the ZIPs) into this item.

This is a useful item if you, yourself, want to download about 100 disk image files and “do stuff” with them. My estimation is that all of you could be transported from the first floor to the penthouse of a skyscraper in 4 elevator trips. Maybe 3. But there you go, folks. They’re dropped there and waiting for you. The Internet Archive even has a link that means “give me everything at once”. It’s actually not that big at all, of course – about 260 megabytes, less than half of a standard CD-ROM.

I could do this all day. It’s really easy. It’s also something most people could do, and I would hope that people sitting on top of 3.5″ floppies from DOS or Windows machines would be up for paying for that cheap USB drive and something like WinImage, and would keep making disk images of these, labeling them as best they can.

I think we can do better, though.

The Archive is running the Emularity, which includes a way to run EM-DOSBOX, which can play not only DOS programs but even Windows 3.11 programs.

Therefore, it’s potentially possible for many of these programs, especially ones particularly suited as stand-alone “applications”, to be turned into in-your-browser experiences to try them out. As long as you’re willing to go through them and get them prepped for emulation.

Which I did.


The Festival Floppies collection is over 500 programs pulled from the floppies that were imaged earlier this week. The only thing they have in common is that they were sitting in a box on a vendor table in Baltimore in 2009, and I thought at a glance they might run and possibly be worth trying out. After I thought this (using a script to present them for consideration), the script did all the work of extracting the files off the original floppy images, putting the programs into an Internet Archive item, and then running a “screen shotgun” I devised with a lot of help a few years back, which plays the emulations, takes the “good shots” and makes them part of a slideshow so you can get a rough idea of what you’re looking at.


You either like the DOS/Windows aesthetic, or you do not. I can’t really argue with you over whatever direction you go – it’s both ugly and brilliant, simple and complex, dated and futuristic. A lot of it depended on the authors and where their sensibilities lay. I will say that once things started moving to Windows, a bunch of things took on a somewhat bland sameness due to the “system calls” for setting up a window, making it clickable, and so on. Sometimes a brave and hearty soul would jazz things up, but they got rarer indeed. On the other hand, we didn’t have 1,000 hobbyist and professional programs re-inventing the wheel, spokes, horse, buggy, stick shift and gumball machine each time, either.


Just browsing over the images, you can probably see cases where someone put real work into the whole endeavor – if the words seem nicely arranged, or the graphics have a particular flair, you might be able to figure out which ones have actual programming flow behind them and might be useful as well. Maybe not a direct indicator, but certainly a flag. It depends on how much you want to crate-dig through these things.

Let’s keep going.

I took a “word cloud” script that showed up as part of an open source package and rewrote it into something I call a “DOS Cloud”. It goes through these archives of shareware, finds all the textfiles in the .ZIP that came along for the ride (think README.TXT, READ.ME, FILE_ID.DIZ and so on) and then counts the most frequent one- and two-word phrases. This ends up being super informative, or not informative at all, but it’s automatic, and I like automatic. Some examples:

Mad Painter: paint, mad, painter, truck, joystick, drive, collision, press, cloud, recieve, mad painter, dos prompt

Screamer: screamer, code, key, screen, program, press, command, memory, installed, activate, code key, memory resident, correct code, key combination, desired code

D4W20: timberline, version, game, sinking, destroyer, gaming, smarter, software, popularity, timberline software, windows version, smarter computer, online help, high score

Certainly in the last case, those words are much more informative than the name D4W20 (which actually stands for “Destroyer for Windows Version 2.0”), and so the machine won the day. I’ve called this “bored intern” level before and I’d say it’s still true – the intern may be bored, but they never stop doing the process, either. I’m sure there’s some nascent class discussion here, but I’ll say that I don’t entirely think this is work for human beings anymore. It’s just more and more algorithms at this point. Reviews and contextual summaries not discernible from analysis of graphics and text are human work.
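If you’re curious what a “DOS Cloud” pass boils down to, here’s a minimal sketch – it assumes the shareware sits in a .ZIP with text files like README.TXT or FILE_ID.DIZ inside, and the name patterns, stopword list and counts are illustrative, not my actual script:

```python
# A minimal sketch of a "DOS Cloud": pull the text files out of a shareware
# .ZIP and count the most frequent one- and two-word phrases. Illustrative
# only; not the actual script described above.
import re
import zipfile
from collections import Counter

TEXT_NAMES = re.compile(r"\.(txt|me|diz|doc|1st)$|^read\.?me$", re.IGNORECASE)
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "in", "for", "this", "you"}

def dos_cloud(zip_path: str, top_n: int = 12) -> list[tuple[str, int]]:
    words: list[str] = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if TEXT_NAMES.search(name.rsplit("/", 1)[-1]):
                # Old DOS text files are usually code page 437.
                text = zf.read(name).decode("cp437", errors="replace").lower()
                words.extend(w for w in re.findall(r"[a-z']+", text)
                             if w not in STOPWORDS and len(w) > 2)
    counts = Counter(words)                                    # one-word phrases
    counts.update(" ".join(p) for p in zip(words, words[1:]))  # two-word phrases
    return counts.most_common(top_n)

if __name__ == "__main__":
    for phrase, n in dos_cloud("example_shareware.zip"):
        print(f"{phrase}: {n}")
```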

For now.


These programs! There are a lot of them, and a good percentage solve problems we don’t have anymore, or problems we now handle with entirely different methods. Single-use programs to deal with Y2K issues, view process tasks better, configure your modem, add a DOS interface, or track a pregnancy. Utilities to put the stardate in the task bar, applications around coloring letters, and so it goes. I think the screenshots help with decisions, if you’re one of the people idly browsing these sets with no personal connection to DOS or Windows 3.1 as a lived experience.

I and others will no doubt write more and more complicated methods for extracting or providing metadata for these items, and work I’m doing in other realms goes along with this nicely. At some point, the entries for each program will have a complexity and depth that rivals most anything written about these subjects at the time, when they were the state of the art in computing experience. I know that time is coming, and it will be near-automatic (or heavily machine-assisted), and it will allow these legions of nearly-lost programs to live again as easily as a few mouse clicks.

But then what?


But Then What is rapidly becoming the greatest percentage of my consideration and thought, far beyond the relatively tiny hurdles we now face in terms of emulation and presentation. It’s just math now with a lot of what’s left (making things look/work better on phones, speeding up the browser interactions, adding support for disk swapping or printer output or other aspects of what made a computer experience lasting to its original users). Math, while difficult, has a way of outing its problems over time. Energy yields results. Processing yields processing.

No, I want to know what’s going to happen beyond this situation, when the phones and browsers can play old everything pretty accurately, enough that you’d “get it” to any reasonable degree playing around with it.

Where do we go from there? What’s going to happen now? This is where I’m kind of floating these days, and there are ridiculously scant answers. It becomes very “journey of the mind” as you shake the trees and only nuts come out.

To be sure, there’s a sliver of interest in what could be called “old games” or “retrogaming” or “remixes/reissues” and so on. It’s pretty much only games, it’s pretty much roughly 100 titles, and it’s stuff that has seeped enough into pop culture or whose parent companies still make enough bank that a profit motive serves to ensure the “IP” will continue to thrive, in some way.

The Gold Fucking Standard is Nintendo, who have moved so successfully into the radical space of “protecting their IP” that they’ve really started wrecking some of the past – people who make “fan remixes” might be up for debate as to whether they should do something with old Nintendo stuff, but laying out threats against people recording how they experienced the games, against any recording of the games for any purpose, and sending legal threats at anyone and everyone even slightly referencing their old stuff, as a core function… well, I’m just saying perhaps ol’ Nintendo isn’t doing itself any favors, but on the other hand they can apparently be the most history-distorting dicks in this space quadrant and people still buy the new games in boatloads. So let’s just set aside the Gold Fucking Standard for a bit when discussing this situation. Nobody even comes close.

There are other companies sort of taking this hard-line approach: “Atari”, Sega, Capcom, Blizzard… but again, these are game companies markedly defending specific games that in many cases they still make money on. In some situations, it’s only one or two games they care about, and I’m not entirely convinced they even remember they made some of the others. They certainly don’t issue gameplay video takedowns, and on the whole, the historical overview of these companies thrives in the world.

But what a small keyhole of software history these games are! There’s entire other experiences related to software that are both available, and perhaps even of interest to someone who never saw this stuff the first time around. But that’s kind of an educated guess on my part. I could be entirely wrong on this. I’d like to find out!

Pursuing this line of thought has sent me hurtling into What even are museums and What even are public spaces and all sorts of more general questions that I have extracted various answers for, and which it turns out are kind of turmoil-y. It has also informed me that nobody completely knows, but holy shit do people without managerial authority have ideas about it. Reeling it over to the online experience of this offline-debated environment solves some problems (10,000 people can look at something with the same closeness and all the time in the world to regard it) and adds others (roving packs of shitty consultant companies doing rough searches on a pocket list of “protected materials” and then sending out form letters towards anything that even roughly matches it, and calling it a ($800) day).

Luckily, I happen to work for an institution that is big on experiments and giving me a laughably long leash, and so the experiment of instant online emulated computer experience lives in a real way and can allow millions of people (it’s been millions, believe it or not) to instantly experience those digital historical items every second of every day.

So even though I don’t have the answers, at all, I am happy that the unanswered portions of the Big Questions haven’t stopped people from deriving a little joy, a little wonder, a little connection to this realm of human creation.

That’s not bad.


Source: http://ascii.textfiles.com/archives/5063

Permanent link to this article: https://www.internetking.us/wordpress/2016/09/22/the-festival-floppies/

Why the Apple II ProDOS 2.4 Release is the OS News of the Year

[Screenshot: ProDOS 2.4 splash screen]

In September of 2016, a talented programmer released his own cooked update to a major company’s legacy operating system, purely because it needed to be done. A raft of new features, wrap-in programs, and bugfixes were included in this release, which I stress was done as a hobby project.

The project is understatement itself, simply called ProDOS 2.4. It updates ProDOS, the last version of which, 2.0.3, was released in 1993.

You can download it, or boot it in an emulator on the webpage, here.

As an update unto itself, this item is a wonder – compatibility has been repaired for the entire Apple II line, from the first Apple II through to the Apple IIgs, as well as cases of various versions of 6502 CPUs (like the 65C02) or cases where newer cards have been installed in the Apple IIs for USB-connected/emulated drives. Important utilities related to disk transfer, disk inspection, and program selection have joined the image. The footprint is smaller, and it runs faster than its predecessor (a wonder in any case of OS upgrades).

The entire list of improvements, additions and fixes is on the Internet Archive page I put up.

[Screenshot: ProDOS 2.4 Bitsy Boot]

The reason I call this the most important operating system update of the year is multi-fold.

First, the pure unique experience of a 23-year-gap between upgrades means that you can see a rare example of what happens when a computer environment just sits tight for decades, with many eyes on it and many notes about how the experience can be improved, followed by someone driven enough to go through methodically and implement all those requests. The inclusion of the utilities on the disk means we also have the benefit of all the after-market improvements in functionality that the continuing users of the environment needed, all time-tested, and all wrapped in without disturbing the size of the operating system programs itself. It’s like a gold-star hall of fame of Apple II utilities packed into the OS they were inspired by.

This choreographed waltz of new and old is unique in itself.

Next is that this is an operating system upgrade free of commercial and marketing constraints and drives. Compare that with, say, an iOS upgrade that trumpets the addition of a search function, or blares out a proud announcement that they broke maps because Google kissed another boy at recess. Or with Windows 10, the 1968 Democratic Convention Riot of Operating Systems, which was designed from the ground up to be compatible with a variety of mobile/tablet products that are on the way out, and which was shoved down the throats of current users with a cajoling, insulting methodology, with misleading opt-out routes and freakier and freakier fake countdowns.

The current mainstream OS environment is, frankly, horrifying, and to see a pure note – a trumpet of clear-minded attention to efficiency, functionality and improvement – stands in testament to the fact that it is still possible to achieve this, albeit on a smaller, slower-moving target. Either way, it’s an inspiration.

[Screenshot: ProDOS 2.4 Bitsy Bye]

Last of all, this upgrade is a valentine not just to the community who makes use of this platform, but to the ideals of hacker improvement reaching back decades before 1993. The number of people this upgrade benefits is relatively small in the world – the number of folks still using Apple IIs is tiny enough that nearly everybody doing so either knows each other, or knows someone who knows everyone else. It is not a route to fame, or a résumé point to get snapped up by a start-up, or a game of one-upmanship shoddily slapped together to prove a point, with a “beta” dropped onto the end as a fig leaf against what could best be called a lab experiment gone off in the fridge. It is done for the sake of what it is – a tool that has been polished and made anew, so the near-extinct audience for it can work to the best of their ability with a machine that, itself, is thought of as the last mass-marketed computer designed by a single individual.

That’s a very special day indeed, and I doubt the remainder of 2016 will top it, any more than I think the first 9 months have.

Thanks to John Brooks for the inspiration this release provides. 

Source: http://ascii.textfiles.com/archives/5054

Permanent link to this article: https://www.internetking.us/wordpress/2016/09/15/why-the-apple-ii-prodos-2-4-release-is-the-os-news-of-the-year/

Who’s Going to be the Hip Hop Hero

People often ask me if there’s a way they can help. I think I have something.

So, the Internet Archive has had a wild hit on its hands with the Hip Hop Mixtapes collection, which I’ve been culling from multiple sources and then shoving into the Archive’s drives through a series of increasingly complicated scripts. When I run my set of scripts, they do a good job of yanking the latest and greatest from a selection of sources, doing all the cleanup work, verifying the new mixtapes aren’t already in the collection, and then uploading them. From there, the Archive’s processes do the rest of the work, and then we have ourselves the latest tapes available to the world.
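For a sense of what “verifying the new mixtapes aren’t already in the collection” can look like, here’s a minimal sketch against the public archive.org advanced-search endpoint. The collection name and query fields here are illustrative guesses, not my actual scripts:

```python
# A minimal sketch of the "don't upload duplicates" check, using the public
# archive.org advanced-search endpoint. The collection name and query fields
# are illustrative assumptions, not the actual scripts described above.
import requests

SEARCH_URL = "https://archive.org/advancedsearch.php"

def already_in_collection(title: str, collection: str = "hiphopmixtapes") -> bool:
    """Return True if an item with this title already exists in the collection."""
    params = {
        "q": f'collection:{collection} AND title:"{title}"',
        "fl[]": "identifier",
        "rows": "1",
        "output": "json",
    }
    resp = requests.get(SEARCH_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["response"]["numFound"] > 0

if __name__ == "__main__":
    tape = "Example Mixtape Vol. 1"   # placeholder title
    if already_in_collection(tape):
        print(f"Skipping {tape!r}: already in the collection.")
    else:
        print(f"{tape!r} not found; would proceed to clean up and upload.")
```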

Since I see some of these tapes get thousands of listens within hours of being added, I know this is something people want. So, it’s a success all around.


With success, of course, comes the two flipside factors: My own interest in seeing the collection improved and expanded, and the complaints from people who know about this subject finding shortcomings in every little thing.

There is a grand complaint that this collection currently focuses on mixtapes from 2000 onwards (and really, 2005 onwards). Guilty. That’s what’s easiest to find. Let’s set that one aside for a moment, as I’ve got several endeavors to improve that.

What I need help with: there is a mass of mixtapes that quickly fell off the radar in terms of being easily downloadable, and I need someone to spend time grabbing them for the collection.

While impressive, the 8,000 tapes up on the Archive are actually just the ones that could be grabbed by scripts without any hangups, like the tapes falling out of favor or the sites offering them going down. If you go by the global list I have, the total number of tapes could be as high as 20,000.

Again, it’s a shame that a lot of pre-2000 mixtapes haven’t yet fallen into my lap, but it’s really a shame that mixtapes that existed, were uploaded to the internet, and were readily available just a couple years ago have faded down into obscurity. I’d like someone (or a coordinated group of someones) to help grab those disparate and at-risk mixtapes and get them into the collection.

I have information on all these missing tapes – the song titles, the artist information, and even information on mp3 size and what was in the original package. I’ve gone out there and tried to do this work, and I can do it, but it’s not a good use of my time – I have a lot of things I have to do and dedicating my efforts in this particular direction means a lot of other items will suffer.

So I’m reaching out to you. Hit me up at mixtapes@textfiles.com and help me build a set of people who are grabbing this body of work before it falls into darkness.

Thanks.

Source: http://ascii.textfiles.com/archives/5049

Permanent link to this article: https://www.internetking.us/wordpress/2016/09/07/whos-going-to-be-the-hip-hop-hero/

Paying to make Internet unequal

Quoted from: http://www.thewatchdogonline.com/paying-to-make-internet-unequal-18382

For over a decade, net neutrality has been a topic of heated discussion in the technology world. Put plainly, net neutrality means that everybody can access the same Internet in the same way, regardless of one’s Internet Service Provider or what they are accessing. Today, Comcast doesn’t really care if a user is accessing a U.S. government site, reading anarchist literature, gaming, watching YouTube or farting around Facebook. Losing net neutrality would mean that service providers would be able to change what can be accessed and how on the Internet.

One of the single most distinctive and important aspects of the Internet is how anybody can do anything on it; even a punk kid living in his mother’s basement eating Hot Pockets can make millions with the right idea. What makes the Internet such an incredible force in today’s world is how accessible it is to anybody of any status. The homeless and destitute can get online at locations all over the place and be equal to everybody else – write essays that are instantly available worldwide, watch lectures on any subject under the sun or play chess against the best of the best. The Internet is a great equalizing factor and the benefit it has provided to humanity will probably never be fully appreciated. On April 23, the FCC announced that it would put forth new rules to let companies pay ISPs for faster data rates, what has been described as a “fast lane” for the Internet. What this means is that the richest corporations in the country can make their content more easily accessed, and the Internet ceases to be an equalizer and becomes yet another facet of our corporatist economy.

The raw amount of competition on the Internet is amazing. The number of social networks, games, video streaming sites and news outlets is astronomical, and the only way one succeeds is by being a better product.

The end of net neutrality changes everything. When a small business has an idea and Google has an idea, Google has the millions to pay ISPs to make its idea work faster while the small business is considering which brand of instant ramen is cheaper.

The Internet is an amazing source of alternative news media. Every day, stories not seen on TV are shared to millions of Americans, things for whatever reason not covered by the major networks. The new FCC rules will allow the major networks to make their sites run faster, gaining more viewers, increasing ad revenue and outcompeting alternative media outlets.

Small businesses already have a hard enough time surviving and competing in the economy we have now, with millions of pages of regulations, a skyrocketing minimum wage and millions of dollars of subsidies going to politically-connected corporations. The principle that all content on the Internet was equal gave everybody equal footing and equal opportunity and that era appears to be at an end. Is it just a coincidence that the current head of the FCC, Thomas Wheeler, worked as a lobbyist for cable companies and has been inducted into the Cable Television Hall of Fame? This also gives our government the ability to hinder access to materials it doesn’t agree with, and with its track record, how can it be trusted?


Permanent link to this article: https://www.internetking.us/wordpress/2014/05/05/paying-to-make-internet-unequal/
