[Washington Legislature] (URGENT ATTENTION) SB 5142 – educational interpreter standard amendment

There is a new bill in the House and Senate that could impact our current educational interpreter standard.




The Washington legislature passed HB 1144 back in 2012, which mandates that all interpreters working in the K-12 setting meet a minimum qualification. HB 1144 granted authority to the Professional Educator Standards Board to determine which test Washington would use to assess interpreters. After a lot of research and debate, the board narrowly voted to adopt the EIPA test (recognized by RID and used in most states) instead of the ESSE (a test administered by the SEE Center). SEE advocates have been deeply upset about that decision ever since, and they have tried numerous times to introduce legislation that changes the standard to include the ESSE. DPAC, WSAD, and WSRID have collaborated to block this legislation from moving forward in the past. The legislature has been hesitant to adopt this ESSE loophole legislation only because there is no validity or reliability data to support the ESSE.



SB 5142


On January 13, 2017, the Senate introduced SB 5142 (with companion bill HB 1303). See here: http://app.leg.wa.gov/billsummary?BillNumber=5142&Year=2017


This bill would do two things:


1) This bill gives interpreters who have not received a passing score on the EIPA one more year to retake the test, and


2) This bill asks the Office of the Superintendent of Public Instruction to provide a report on how much it would cost for the legislature to pay for ESSE validity and reliability research.


Potential Concerns


1) One Year Extension

  • This could be a good thing as long as 1) the extension is for one year and one year only; 2) the legislature properly funds mentorship and training opportunities for interpreters who need to retake the test; AND 3) the legislature properly funds school districts so they can recruit competent interpreters to replace those who are unable to pass the EIPA.


2) Report About Funding ESSE Research

  • This part of the bill is a bigger concern to me because it addresses a made-up, fantasy problem designed by SEE advocates to tug at the heartstrings and the pocketbooks of the legislature. The SEE advocates have convinced representatives that there are deaf students in our school system who understand only SEE and are deprived of interpreters under the current standard.
  • First, we do not need a test that assesses SEE because the EIPA already adequately assesses the competency of SEE interpreters. We do not want to create an ESSE loophole for incompetent interpreters.
  • Second, the legislature seems to be considering paying for research for a private, for-profit school, which has a harmful educational philosophy.
  • Third, a competent ASL interpreter can learn SEE in a very short amount of time, but the opposite is not true. If there were truly a lack of SEE interpreters, competent ASL interpreters could be quickly trained and reassigned to those placements as needed.
  • Finally, we could be stuck with bad research. If this bill passes and the legislature decides to fund this research, the results could inappropriately validate the ESSE, and then we lose our argument that this is a bad test. The legislature seems eager to move ahead with adoption of the ESSE, and all it needs to do so is that validity and reliability data. If that data is obtained, it will be much harder to prevent ESSE legislation from moving forward in the future.


What should you do?


1) Contact your representatives and invite them to attend the legislative reception on Feb. 1st! We will be there even if you can’t make it, and we would love to talk to your representatives about this bill.



2) Let your representative know who you are and that you oppose this bill unless it is amended to strike Section 2.


3) Come to the committee hearing to testify against this bill unless it is amended. No date is set, so please keep an eye on the Washington Legislature website for notice of the next hearing.


Permanent link to this article: https://www.internetking.us/wordpress/2017/01/20/washington-legislature-urgent-attention-sb-5142-educational-interpreter-standard-amendment/

What you need to know: Anti-Trump protests planned in Seattle

Multiple anti-Trump protests and events are scheduled for Seattle around the inauguration. This comes after a series of protests in the city, such as this one, immediately after the election. (KIRO 7)


This article has been updated with additional anti-Trump protests planned for the Seattle area.

Protests erupted in Seattle immediately after Donald Trump defeated Democrat Hillary Clinton in the Electoral College. Those protests may have been just a preview for Seattle.

Related: Kshama Sawant warns of more Trump protests in the future

People in the Seattle area plan to continue their opposition with a series of anti-Trump protests and events as Trump takes over the Oval Office. Nearly 2.9 million more people voted for Clinton than for Trump in the popular vote, and the Seattle region was one of the areas that came out strong for Clinton. More than 6 million people voted for third-party candidates.

Anti-Trump protests in Seattle

There are three major events planned and targeted at President-elect Trump.

Jan. 18: Poster-making party for upcoming anti-Trump protests.

Jan. 19: Guerrilla Art School’s Night of Resistance. Artful Trump protests.

Jan. 20: On Inauguration Day, a “Resist Trump: Occupy Inauguration – Seattle!” protest is planned for downtown Seattle at Westlake Park from 5-8 p.m. It is organized by Sawant’s political party, Socialist Alternative, as well as Socialist Students of Seattle. The Facebook event indicates that more than 11,000 people are interested in participating in the demonstration, with 3,900 people confirmed as attending and another 9,300 invited to the protest.

The event specifically cites opposition to building a wall on the Mexican border, stopping the Dakota Access Pipeline, ending rape culture, and supporting Black Lives Matter. The Facebook event page reads:

The Democratic Party has proven they are incapable of stopping Trump. It is time to build a new party for the 99% based on the united power of all exploited and oppressed people, on movements for social and economic justice, on the belief that we CAN do better than this corrupt and rotten system!

#ResistTrump !! #OccupyInauguration !!

Jan. 20: Beer Trumps Hate. Protesting Trump while having a beer. A fundraiser for the ACLU in resistance to the new president. 11:30-11:55 am at The Red Door in Fremont.

Jan. 20: Seattle student walkout. Centered on Seattle Central College, inviting students to walk out of class at noon in protest. About 250 people are expected to participate.

Jan. 20: Race for our Rights 5K. A running event in Magnuson Park from 6-8 p.m., set up to raise funds for Planned Parenthood in light of Trump’s election.

Jan. 20: Protest Milo Yiannopoulos at University of Washington. People are planning to protest an appearance by the controversial conservative at UW. Doors open at 7 p.m.

Jan. 20: Bed-In for Peace. Inspired by John Lennon and Yoko Ono’s 1969 bed-in protest, from 8 to 9:30 a.m. Organized by KEXP.

Jan. 21: Another 5K fundraiser for Planned Parenthood in protest of Trump’s inauguration at 9:30 a.m. The run will start at the boathouse and go around Green Lake.

Jan. 21: The day after the inauguration, the “Women’s March on Seattle” is planned between 10 a.m. and 4 p.m. in downtown Seattle. A route for the march has yet to be released, but the Facebook event page states it starts at Judkins Park at 11 a.m. and ends at the Seattle Center. The women’s march is organized by four private citizens. As of Tuesday morning, 30,000 people have signed on for the Seattle march, with another 41,000 interested in attending.

The event announcement reads:

In solidarity with the march taking place in Washington, DC, we will march in Seattle. ALL women, femme, trans, gender non-conforming, and feminist people (including men and boys) are invited to march. We are showing our support for the community members who have been marginalized by the recent election.

The Seattle women’s march is meant to coincide with the larger, national march on Washington D.C. that same day.

Jan. 22: Pantsuit 5K run/walk in protest of Trump’s inauguration from 9 a.m.-10 a.m. Also benefiting Planned Parenthood, and also around Green Lake.

MyNorthwest included events in this post that will likely take over public or common space. Additional events, some requiring tickets, can be found here.

Permanent link to this article: https://www.internetking.us/wordpress/2017/01/20/what-you-need-to-know-anti-trump-protests-planned-in-seattle/

“Resist Trump” Gathering at Westlake Center Plaza, Jan. 20 (Friday), 4-7PM

There will be a huge “Resist Trump” rally on Inauguration Day:

Westlake Center Plaza.
Friday January 20th.
4pm to 7pm.

Interpreters: Jeff Wildenstein and Michelle Sumner.

The event is actually all day long, but the main event is to begin at 4 p.m.

One of the speakers is Kshama Sawant.  She personally invited the Deaf
community to be present in recognition that even Deaf folks may be
impacted by the incoming Trump administration.

Please use public transportation (bus and light rail) as the easiest way
to get there.   Traffic may be at a standstill.



Permanent link to this article: https://www.internetking.us/wordpress/2017/01/20/resist-trump-gathering-at-westlake-center-plaza-jan-20-friday-4-7pm/

Allena, founder of CSPC, was laid off from her job.

There is bad news regarding the CSPC founder… I am upset about it…

Quoted from an email regarding Allena:

Moving to a new building is an amazing opportunity, and a big challenge. This is going to be a better space for us in every way, but the buildout is proving to be even more expensive than we’d thought. The permitting process is also taking longer for reasons outside of our control. It’s not 1999; we can’t keep money coming in by throwing regular parties in a construction zone. In order to make it through this, we’re reducing staff, and that means laying off Allena Gabosch.

Allena has been involved with the Center for Sex Positive Culture from the beginning. She managed the club for fifteen years as the Executive Director, going above and beyond the call of duty too many times to count. For the last couple of years she’s continued to be involved part-time as the Development Director. She’s also been Mom to many of us.

This is a hard and unpleasant choice. We planned for an expensive buildout and a few months dark, but both are proving to be even more than expected. We’re going to call on everyone to please pitch in throughout this process. We have to both reduce our ongoing operating costs and raise money to make up the additional buildout expenses in order to survive until we are able to reopen in our new home.

We are going to miss Allena so much. I can’t say how much of a loss this is to me personally, and I think that’s probably true for every single other person on the board. She has given many years of her life to this organization, creating and shaping it. I hope that going forward we as an organization, as a community, and as individuals are able to find ways to honor her as much as she deserves. I wish her well with all my heart.

Thank you, Allena.

Russell Harmon
President, Board of Directors

Here is her Facebook post with her reaction to it; you can leave comments for her there…

Permanent link to this article: https://www.internetking.us/wordpress/2017/01/20/allena/

Hello All…

Welcome to my new blog….


It will have different news and stuff in general, as well as Gor or BDSM… 🙂

Permanent link to this article: https://www.internetking.us/wordpress/2017/01/20/hello-all/

Now That’s What I Call Script-Assisted-Classified Pattern Recognized Music

Merry Christmas; here is over 500 days (12,000 hours) of music on the Internet Archive.

Go choose something to listen to while reading the rest of this. I suggest either something chill or perhaps this truly unique and distinct ambient recording.


Let’s be clear. I didn’t upload this music, I certainly didn’t create it, and actually I personally didn’t classify it. Still, 500 Days of music is not to be ignored. I wanted to talk a little bit about how it all ended up being put together in the last 7 days.

One of the nice things about working for a company that stores web history is that I can use it to do archaeology against the company itself. Doing so, I find that the Internet Archive started soliciting “the people” to begin uploading items en masse around 2003. This is before YouTube, and before a lot of other services out there.

I spent some time tracking dates of uploads, and you can see various groups of people gathering interest in the Archive as a file destination in the early ’00s, but it was a relatively limited set all around.

Part of this is that it was a little bit of a non-intuitive effort to upload to the Archive; as people figured it all out, they started using it, but a lot of other people didn’t. Meanwhile, YouTube and other services came into being, and they picked up a lot of the “I just want to put stuff up” crowd.

By 2008, things started to take off for Internet Archive uploads. By 2010, things took off so much that 2008 looks like nothing. And now it’s dozens or hundreds of multimedia uploads a day through all the Archive’s open collections, not counting others who work with specific collections they’ve been given administration of.

In the case of the general uploads collection of audio, which I’m focusing on in this entry, the number of items is now at over two million.

This is not a sorted, curated, or really majorly analyzed collection, of course. It’s whatever the Internet thought should be somewhere. And what ideas they have!

Quality varies. Findability varies too, although the addition of new search facets and previews has made things better over the years.

I decided to do a little experiment: slight machine-assisted “find some stuff” sorting. Let it loose on 2 million items in the hopper, see what happens. The script was called Cratedigger.

Previously, I did an experiment with keywording on texts at the Archive – the result was “bored intern” level, which was definitely better than nothing, and in some cases, that bored intern could slam through a 400-page book and determine a useful word cloud in less than a couple of seconds. Many collections of items I uploaded have these word clouds now.
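To make the “bored intern” idea concrete, here is a minimal sketch of frequency-based keywording in Python; the function name, stopword list, and sample text are all my own invention for illustration, not the Archive’s actual script:

```python
from collections import Counter
import re

# A tiny stopword list; a real keyworder would use a much larger one.
STOPWORDS = {"the", "and", "of", "to", "a", "in", "is", "that", "it", "for"}

def word_cloud(text, top_n=10):
    """Rank the most frequent non-stopword terms in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(top_n)

sample = ("The organ recital was recorded in the chapel. "
          "The organ is a tracker organ built in 1895, and the recital "
          "program features chapel favorites.")
print(word_cloud(sample, top_n=3))  # "organ" dominates the cloud
```

Even something this naive tends to surface the right handful of topic words, which is exactly the “better than nothing” bar described above.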

It’s a little different with music. I went about it this way with a single question:

  • Hey, uploader – could you be bothered to upload a reference image of some sort as well as your music files? Welcome to Cratediggers.

Cratediggers is not an end-level collection – it’s a holding bay for additional work, but it does show that the vast majority of people would upload a sound file and almost nothing else. (I’ve not analyzed the quality of description metadata in the no-image items – that’ll happen next.) The resulting ratio of items-in-uploads to items-for-cratediggers is pretty striking – fewer than 150,000 items out of the two million passed this rough sort.

The Bored Audio Intern worked pretty OK. By simply sending a few parameters, the Cratediggers collection ended up building on itself by the thousands without me personally investing time. I could then focus on more specific secondary scripts that do things in an even lazier manner, ensuring laziness all the way down.

The next script allowed me to point to an item in the Cratediggers collection and say “put everything by this uploader that is in Cratediggers into this other collection”, with “this other collection” being spoken word, sermons, or music. In general, a person who uploaded music that got into Cratediggers had uploaded other music as well. (Same with sermons and spoken word.) As I ran these helper scripts, they did amazingly well; I didn’t have to do much beyond that.
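As a rough illustration of the two-pass sort described above, here is a hedged Python sketch; the item records, field names, and extension list are invented for the example and are not the real Cratedigger code:

```python
# Pass 1: hold back any item whose uploader bothered to include a
# reference image. Pass 2: sweep everything by a vetted uploader
# into the matching collection.

IMAGE_EXTS = (".jpg", ".jpeg", ".png", ".gif")

def has_reference_image(item):
    """An item passes the rough sort if any of its files looks like an image."""
    return any(f.lower().endswith(IMAGE_EXTS) for f in item["files"])

def cratediggers_pass(items):
    return [i for i in items if has_reference_image(i)]

def promote_by_uploader(items, uploader, collection):
    """Move everything by one known-good uploader into a single collection."""
    for i in items:
        if i["uploader"] == uploader:
            i["collection"] = collection

# Invented sample items standing in for the 2-million-item uploads pool.
items = [
    {"id": "demo-tape-01", "uploader": "alice",
     "files": ["tape.mp3", "cover.jpg"], "collection": "uploads"},
    {"id": "sermon-442", "uploader": "bob",
     "files": ["sermon.mp3"], "collection": "uploads"},
    {"id": "demo-tape-02", "uploader": "alice",
     "files": ["tape2.mp3", "art.png"], "collection": "uploads"},
]

held = cratediggers_pass(items)           # only items with a reference image
promote_by_uploader(held, "alice", "music")
print([i["id"] for i in held])
```

The point of the second pass is the heuristic in the text: an uploader who contributed one music item with a cover almost certainly contributed more music, so one human judgment fans out over their whole upload history.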

As of this writing, the music collection contains over 400 solid days of music. They are absolutely genre-busting, ranging from industrial and noise all the way through beautiful jazz and a cappella. There are one-of-a-kind rock and acoustic albums, and simple field recordings of live events.

And, ah yes, the naming of this collection… Some time ago I took the miscellaneous texts and writings and put them into a collection called Folkscanomy.

After trying to come up with the same sort of name for sound, I discovered a very funny thing: you can’t really attach any two words involving sound together without finding some company or manufacturer already using it as a name. Trust me.

And that’s how we ended up with Folksoundomy.

What a word!

The main reason for this is I wanted something unique to call this collection of uploads that didn’t imply they were anything other than contributed materials to the Archive. It’s a made-up word, a zesty little portmanteau that is nowhere else on the Internet (yet). And it leaves you open for whatever is in them.

So, about the 500 days of music:

Absolutely, one could point to YouTube and the mass of material being uploaded there as being superior to any collection sitting on the Archive. But the problem is that they have their own robot army, which is a tad more evil than my robotic bored interns: you have content scanners with both false positives and strange decorations, you have ads being put on the front of things randomly, and you have a whole family of other small stabs and jabs toward an enjoyable experience getting in your way every single time. The Internet Archive does not log you, require a login, or demand other handfuls of your soul. So, for cases where people are uploading their own works and simply want them to be shared, I think the choice is superior.

This is all, like I said, an experiment – I’m sure the sorting has put some things in the wrong place, or we’re missing out on some real jewels whose uploaders didn’t think to make a “cover” or icon for the files. But as a first swipe, I moved 80,000 items around in 3 days, and that’s more than any single person can normally do.

There’s a lot more work to do, but that music collection is absolutely filled with some beautiful things, as is the whole general Folksoundomy collection. Again, none of this is me, or some talent I have – this is the work of tens of thousands of people, contributing to the Archive to make it what it is, and while I think the Wayback Machine has the lion’s share of the Archive’s world image (and deserves it), there’s years of content and creation waiting to be discovered for anyone, or any robot, that takes a look.

Source: http://ascii.textfiles.com/archives/5117

Permanent link to this article: https://www.internetking.us/wordpress/2016/12/24/now-thats-what-i-call-script-assisted-classified-pattern-recognized-music/

Back That Thing Up


I’m going to mention two backup projects. Both have been under way for some time, but the world randomly decided the end of November 2016 was the big day, so here I am.

The first is that the Internet Archive is adding another complete mirror of the Wayback Machine to one of our satellite offices in Canada. Due to the laws of Canada, to be able to do “stuff” in the country, you need to set up a separate company from your US concern. If you look up a lot of major chains and places, you’ll find they all have Canadian corporations. Well, so does the Internet Archive, and that separate company is in the process of getting a full backup of the Wayback Machine and other related data. It’s 15 petabytes of material, or more. It will cost millions of dollars to set up, and that money is already going out the door.

So, if you want, you can go to the donation page and throw some money in that direction and it will make the effort go better. That won’t take very long at all and you can feel perfectly good about yourself. You need read no further, unless you have an awful lot of disk space, at which point I suggest further reading.


Whenever anything comes up about the Internet Archive’s storage solutions, there’s usually a fluttery cloud of second-guessing and “big sky” suggestions about how everything is being done wrong and why not just engage a HBF0_X2000-PL and fark a whoziz and then it’d be solved. That’s very nice, but there’s about two dozen factors in running an Internet Archive that explain why RAID-1 and Petabyte Towers combined with self-hosting and non-cloud storage has worked for the organization. There are definitely pros and cons to the whole thing, but the uptime has been very good for the costs, and the no-ads-no-subscription-no-login model has been working very well for years. I get it – you want to help. You want to drop the scales from our eyes and you want to let us know about the One Simple Trick that will save us all.

That said, when this sort of insight comes out, it’s usually back-of-napkin and done by someone who will be volunteering several dozen solutions online that day, and that’s a lot different than coming in for a long chat to discuss all the needs. I think someone volunteering a full coherent consult on solutions would be nice, but right now things are working pretty well.

There are backups of the Internet Archive in other countries already; we’re not that bone stupid. But this would be a full, constantly maintained backup in Canada, one that would be interfaced with other worldwide stores. It’s a preparation for an eventuality that hopefully won’t come to pass.

There’s a climate of concern and fear that is pervading the landscape this year, and the evolved rat-creatures that read these words in a thousand years will be able to piece together what that was. But regardless of your take on the level of concern, I hope everyone agrees that preparation for all eventualities is a smart strategy as long as it doesn’t dilute your primary functions. Donations and contributions of a monetary sort will make sure there’s no dilution.

So there’s that.

Now let’s talk about the backup of this backup that a great set of people have been working on.


About a year ago, I helped launch INTERNETARCHIVE.BAK. The goal was to create a fully independent distributed copy of the Internet Archive that was not reliant on a single piece of Internet Archive hardware and which would be stored on the drives of volunteers, with 3 geographically distributed copies of the data worldwide.

Here’s the current status page of the project. We’re backing up 82 terabytes of information as of this writing. It was 50 terabytes last week. My hope is that it will be 1,000 terabytes sooner rather than later. Remember, this is 3 copies, so each terabyte backed up needs three terabytes of volunteer space.

For some people, a terabyte is this gigantically untenable number and certainly not an amount of disk space they just have lying around. Other folks have, at their disposal, dozens of terabytes. So there’s lots of hard drive space out there, just not evenly distributed.

The IA.BAK project is a complicated one, but the general situation is that it uses the program git-annex to maintain widely-ranged backups from volunteers, with “check-ins” of data integrity on a monthly basis. It has a lot of technical meat to mess around with, and we’ve had some absolutely stunning work done by a team of volunteer developers and maintainers as we make this plan work on the ground.

And now, some thoughts on the Darkest Timeline.


I’m both an incredibly pessimistic and optimistic person. Some people might use the term “pragmatic” or something less charitable.

Regardless, I long ago gave up assumptions that everything was going to work out OK. It has not worked out OK in a lot of things, and there’s a lot of broken and lost stuff in the world. There’s the pessimism. The optimism is that I’ve not quite given up hope that something can be done about it.

I’ve now dedicated 10% of my life to the Internet Archive, and I’ve dedicated pretty much all of my life to the sorts of ideals that would make me work for the Archive. Among those ideals are free expression, gathering of history, saving of the past, and making it all available, without limit, to as wide an audience as possible. These aren’t just words to me.

Regardless of whether one perceives the coming future as rife with specific threats, I’ve discovered that life is consistently filled with threats, and only vigilance and dedication can break past the fog of possibilities. To that end, the Canadian backup of the Internet Archive and the IA.BAK projects are clear bright lines of effort to protect against all futures, dark and bright. The heritage, information and knowledge within the Internet Archive’s walls are worth protecting at all cost. That’s what drives me and why these two efforts are more than just experiments or configurations of hardware and location.

So, hard drives or cash, your choice. Or both!

Source: http://ascii.textfiles.com/archives/5110

Permanent link to this article: https://www.internetking.us/wordpress/2016/11/29/back-that-thing-up/

In Which I Tell You It’s A Good Idea To Support a Magazine-Scanning Patreon

So, Mark Trade and I have never talked, once.

All I know about Mark is that due to his efforts, over 200 scans of magazines are up on the Archive.


These are very good scans, too. The kind of scans that a person looking to find a long-lost article, verify a hard-to-grab fact, or pass along a great image to others would kill to have. 600 dots per inch, excellent contrast, clarity, and the margins cut just right.


So, I could fill this entry with all the nice covers, but covers are kind of easy, to be frank. You put them face down on the scanner, you do a nice big image, and then touch it up a tad. The cover paper and the printing is always super-quality compared to the rest, so it’ll look good:


But the INSIDE stuff… that’s so much harder. Magazines were often bound in a way that put the images RIGHT against the binding, not every magazine did the proper spacing, and all of it is very hard to shove into a scanner without losing some information. I have a lot of well-meaning scans in my life with a lot of information missing.

But these…. these are primo.




When I stumbled on the Patreon, he had three patrons giving him $10 a month. I’d like it to be $500, or $1000. I want this to be his full-time job.

Reading the Patreon page’s description of his process shows he’s taking it quite seriously: steaming glue, removing staples. I’ve gone on record about the pros and cons of destructive scanning, but game magazines are not rare, just entirely unrepresented in scanned items compared to how many people have these things in their past.

I read something like this:

It is extremely unlikely that I will profit from your pledge any time soon. My scanner alone was over $4,000 and the scanning software was $600. Because I’m working with a high volume of high resolution 600 DPI images I purchased several hard drives including a CalDigit T4 20TB RAID array for $2,000. I have also spent several thousand dollars on the magazines themselves, which become more expensive as they become rarer. This is in addition to the cost of my computer, monitor, and other things which go into the creation of these scans. It may sound like I’m rich but really I’m just motivated, working two jobs and pursuing large projects.

…and all I think about is, this guy is doing so much amazing work that so many thousands could be benefiting from, and they should throw a few bucks at him for his time.

My work consists of carefully removing individual pages from magazines with a heat gun or staple-remover so that the entire page may be scanned. Occasionally I will use a stack paper cutter where appropriate and will not involve loss of page content. I will then scan the pages in my large format ADF scanner into 600 DPI uncompressed TIFFs. From there I either upload 300 DPI JPEGs for others to edit and release on various sites or I will edit them myself and store the 600 DPI versions in backup hard disks. I also take photos of magazines still factory-sealed to document their newsstand appearance. I also rip full ISOs of magazine coverdiscs and make scans of coverdisc sleeves on a color-corrected flatbed scanner and upload those to archive.org as well.

This is the sort of thing I can really get behind.

The Internet Archive is scanning stuff, to be sure, but the focus is on books. Magazines are much, much harder to scan – the book scanners in use are just not as easy to use with something bound like magazines are. The work that Mark is doing is stuff that very few others are doing, and to have canonical scans of the advertisements, writing and materials from magazines that used to populate the shelves is vital.

Some time ago, I gave all my collection of donated game-related magazines to the Museum of Art and Digital Entertainment, because I recognized I couldn’t be scanning them anytime soon, and how difficult scanning them was going to be. It would take some real major labor I couldn’t personally give.

Well, here it is. He’s been at it for a year. I’d like to see that monthly number jump to $100/month, $500/month, or more. People dropping $5/month towards this Patreon would be doing a lot for this particular body of knowledge.

Please consider doing it.


Source: http://ascii.textfiles.com/archives/5097

Permanent link to this article: https://www.internetking.us/wordpress/2016/11/20/in-which-i-tell-you-its-a-good-idea-to-support-a-magazine-scanning-patreon/

A Simple Explanation: VLC.js

The previous entry got the attention it needed, and the maintainers of the VLC project connected with both Emularity developers and Emscripten developers and the process has begun.

The best example of where we are is this screenshot:


The upshot of this is that a JavaScript-compiled version of the VLC player now runs, spits out a bunch of status and command-line information, and then gets cranky that it has no video/audio device to use.

With the Emularity project, this was something like 2-3 months into the project. In this case, it happened in 3 days.

The reasons it took such a short time were several. First, the VLC maintainers jumped right into it at full-bore. They’ve had to architect VLC for a variety of wide-ranging platforms including OSX, Windows, Android, and even weirdos like OS/2; to have something aimed at “web” is just another place to go. (They’d also made a few web plugins in the past.) Second, the developers of Emularity and Emscripten were right there to answer the tough questions, the weird little bumps and switchbacks.

Finally, everybody has been super-energetic about it – diving into the idea, without getting hung up on factors or features or what may emerge; the same flexibility that coding gives the world means that the final item will be something that can be refined and improved.

So that’s great news. But after the initial request went into a lot of screens, a wave of demands and questions came along, and I thought I’d answer some of them to the best of my abilities, and also make some observations as well.


When you suggest something somewhat crazy, especially in the programming or development world, you get a wide range of responses. And if you end up on Hacker News, Reddit, or a number of other high-traffic locations, those reactions fall into some very predictable areas:

  • This would be great if it happens
  • This is fundamentally terrible, let me talk about why for 4 paragraphs
  • You are talking about making a sword. I am a swordmaker. I have many opinions.
  • My sister was attacked by a C library and I’m going to tell a very long story
  • Oh man, Jason Scott, this guy

So, quickly on some of these:

  • It’s understandable that some people want to throw the whole idea under the bus, because the web browser playing a major part in transactions is a theoretical hellscape compared to an ideal infrastructure. But that’s what we have, and here we go.
  • I know that porting things to JavaScript sounds crazy. I find that people think we’re rewriting things from scratch, when in fact we use Emscripten, which compiles existing code out to JavaScript as a target (and later WebAssembly). We do not rewrite from scratch.
  • Browsers do some of this heavy lifting, but it depends on the browser, on the platform, and on the day, and they do not talk to each other. If there were a way to include a framework that tells a browser what to do with ‘stuff’, so it could bring in both the stuff and the instructions and do the work, great. Yes, there are plenty of stuff/instructions pairs (webpage/HTML, audio/MP3) that browsers take in, but it’s different everywhere.
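The Emscripten workflow mentioned in the list above can be sketched roughly like this. The filenames and the trivial program are invented for illustration; this is nothing like VLC’s actual build, just the general shape of “same source, JavaScript output”:

```shell
# Hypothetical sketch: emcc (the Emscripten compiler) is used in place of
# a normal C compiler and emits JavaScript (and, in newer toolchains,
# WebAssembly) instead of a native binary.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) {
    puts("compiled for the web");
    return 0;
}
EOF

if command -v emcc >/dev/null 2>&1; then
  emcc hello.c -o hello.js   # same C source, JavaScript output
  node hello.js              # run it outside the browser as a sanity check
else
  echo "emcc not installed; see emscripten.org for the SDK"
fi
```

The point is that the existing, battle-tested C codebase is reused as-is; only the compilation target changes.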

But let’s shift over to why I think this is important, and why I chose VLC to interact with.

First, VLC is one of those things that people love, or people wish there was something better than, but VLC is what we have. It’s flexible, it’s been well-maintained, and it has been singularly focused. For a very long time, the goal of the project has been aimed at turning both static files AND streams into something you can see on your machine. And the machine you can see it on is pretty much every machine capable of making audio and video work.

Fundamentally, VLC is a bucket: drop in a very wide variety of sound-oriented or visual-oriented files and containers, and it will do something with them. DVD ISO files become playable DVDs, including all the features of said DVDs. VCDs become craptastic but playable discs. MP3, FLAC, MIDI: all of them fall into VLC and become scrubbing-ready sound experiences. There are quibbles here and there about accuracy of reproduction (especially with older MOD-like formats such as S3M or .XM), but those quibbles are code, and fixable in code. That VLC doesn’t immediately barf on the rug, given the amount of crapola that can be thrown at it, is enormous.

And completing this thought: by choosing something like VLC, with its top-down open-source condition and universal approach, the “closing of the loop” of VLC being available in all browsers will ideally cause people to find the time to improve and add formats that otherwise wouldn’t get such advocacy. Apple II floppy disk images? Oscilloscope captures? Morse code evaluation? Slow-scan television? If those items have a future, it’s probably in VLC, and it’s much more likely if the web uses a VLC that just appears in the browser, no fuss or muss.


Fundamentally, I think my personal motivations are pretty transparent and clear. I help oversee a petabytes-big pile of data at the Internet Archive. A lot of it is very accessible; even more of it is not, or has to have clever “derivations” pulled out of it for access. You can listen to .FLAC files that have been uploaded, for example, because we derive (noted) mp3 versions that travel through the web more easily. Same for the MPG files that become .mp4s, and so on, and so on. A VLC that can (optionally) play the originals, or access formats that currently sit as huge lumps in our archives, will be a fundamental world-changer.
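The kind of “derivation” described above looks roughly like the following sketch. The filenames are placeholders and the flags are illustrative, not the Archive’s actual pipeline; a synthetic test tone stands in for an uploaded original so the example is self-contained:

```shell
# Hypothetical sketch of deriving a web-friendly mp3 from a FLAC original.
if command -v ffmpeg >/dev/null 2>&1; then
  # Synthesize a one-second 440 Hz tone as a stand-in "original" upload.
  ffmpeg -y -loglevel error -f lavfi -i "sine=frequency=440:duration=1" original.flac
  # Derive a lossy mp3 copy that streams easily in any browser.
  ffmpeg -y -loglevel error -i original.flac derived.mp3
  ls -l original.flac derived.mp3
else
  echo "ffmpeg not installed; commands shown for illustration only"
fi
```

Every such derivation is an extra copy to store and keep in sync; a browser-side VLC that plays the original directly removes that whole step.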

Imagine playing DVDs right there, in the browser. Or really old computer formats. Or doing a bunch of simple operations to incoming video and audio to improve it without having to make a pile of slight variations of the originals to stream. VLC.js will do this and do it very well. The millions of files that are currently without any status in the archive will join the millions that do have easy playability. Old or obscure ideas will rejoin the conversation. Forgotten aspects will return. And VLC itself, faced with such a large test sample, will get better at replaying these items in the process.

This is why this is being done. This is why I believe in it so strongly.


I don’t know what roadblocks or technical decisions the team has ahead of it, but they’re working very hard at it, and some sort of prototype seems imminent. The world with this happening will change slightly when it starts working. But as it refines, and as these secondary aspects begin, it will change even more. VLC will change. Maybe even browsers will change.

Access drives preservation. And that’s what’s driving this.

See you on the noisy and image-filled other side.

Source: http://ascii.textfiles.com/archives/5089

Permanent link to this article: https://www.internetking.us/wordpress/2016/11/17/a-simple-explanation-vlc-js/

A Simple Request: VLC.js

Almost five years ago to the day, I made a simple proposal to the world: port MAME/MESS to JavaScript.

That happened.

I mean, it cost a dozen people hundreds of hours of their lives… and there were tears, rage, crisis, drama, and broken hearts and feelings… but it did happen, and the world we live in now, with instantaneous emulated programs in the browser, is quite amazing. And it’s gotten boring for the people who know about it, except for those who are only hearing about it now.

By the way: work continues earnestly on what was called JSMESS and is now called The Emularity. We’re doing experiments with putting it in WebAssembly and refining a bunch of UI concerns and generally making it better, faster, cooler with each iteration. Get involved – come to #jsmess on EFNet or contact me with questions.

In celebration of the five years, I’d like to suggest a new project, one of several candidates I’ve weighed but which I think has the best combination of effort to absolute game-changer in the world.


Hey, come back!

It is my belief that a Javascript (later WebAssembly) port of VLC, the VideoLan Player, will fundamentally change our relationship to a mass of materials and files out there, ones which are played, viewed, or accessed. Just like we had a lot of software locked away in static formats that required extensive steps to even view or understand, so too do we have formats beyond the “usual” that are also frozen into a multi-step process. Making these instantaneously function in the browser, all browsers, would be a revolution.

A quick glance at the features list of VLC shows how many variant formats it handles, from audio and sound files through to encapsulations like DVDs and VCDs. Files that now rest as hunks of ISOs and .ZIP files could be turned into living, participatory parts of the online conversation. Also, formats like .MOD and .XM (trust me) would effectively live again.

Also, VLC has weathered years and years of existence, and this additional use case would help people contribute to it, much as MAME/MESS has improved over time as folks who normally didn’t dip in added suggestions or feedback that made the project better in pretty obscure realms.

I firmly believe that this project, fundamentally, would change the relationship of audio/video to the web. 

I’ll write more about this in coming months, I’m sure, but if you’re interested, stop by #vlcjs on EFnet, or ping me on twitter at @textfiles, or write to me at vlcjs@textfiles.com with your thoughts and feedback.

See you.


Source: http://ascii.textfiles.com/archives/5084

Permanent link to this article: https://www.internetking.us/wordpress/2016/11/01/a-simple-request-vlc-js/
