“I’ve lost all my work!” That was the first thought in my head as I found out that the Media Editing Suite, affectionately called “Andy Land”, had been flooded with water from the ceiling due to a … well, I still don’t know the cause really. I’ve listened to words come out of people’s mouths, but I can’t make sense of it. Frozen and thawed pipes. HVAC units failing. Antifreeze. The cause is immaterial. The incident has caused countless people headaches and almost criminal wastes of time.
So while waiting for a backup to occur on the Mac Pro that was in the room – it was still running when the Building Manager, Cartland, opened the door and then cut the power – I created a therapeutic video. Nothing special, just some footage that I was able to recover from the Blackmagic 4K camera that I had used just six days before. Yeah, you heard right. I was able to back up the hard drive on the Mac Pro, which I turned on today and connected to some of the other equipment that I was able to get running.
My work, for the moment, is safe. I wasted a good amount of time the last couple of days. We had visitors from Michigan State that I would have liked to spend more time with. Other UMW personnel have lost some of their work time as well. If it wasn’t for Cartland coming in on his day off, the flood may have been discovered a lot later. DTLT folks have been displaced from their ITCC work spaces – only 7 months old – obviously the building isn’t able to walk by itself yet.
It’s been bittersweet, this new building of ours. Students love the space. It’s a roaring success in terms of filling a need. I continue to hope that it remains worthy of the talented students we have at UMW. But Tuesday, February 17, 2015 was a setback for that hope. Next we’ll talk about whose fault it was, who pays for the damage, and what’s “covered”.
As I said on Twitter, “it’s just stuff”. But it’s also a representation of work, and not just mine. The people who helped order and receive the stuff. The people who delivered and even set up the stuff. We do our work to get paid, mostly. Some of us are lucky enough to work at things we’re passionate about. Sometimes we have to work on things we shouldn’t have to.
Our work is beyond the machines. Sometimes it’s inside of them, and we work to keep them working so that they can help us create more of our work. If the machines stop working, we can lose our work. Bits and bytes dried up and blown away. Or flooded and washed away with corrosive water. Something is keeping those bits of my work alive, and I’m grateful to whatever (and whoever) it is.
Vexing problems are sometimes set aside to deal with at another time.
Let’s begin at the beginning. Since moving into the ITCC I have had the pleasure of using some 4K monitors (these Sharps – yes, Plural!) on the new Mac Pro in my video editing room. And yes, I am completely spoiled now. However, you may not understand how a 4K monitor fits into a desktop setup. I have another post brewing about 4K in general, but 4K resolution brings with it a term coined by Apple – Retina displays.
So how does a Retina display figure into this post? Well, one of the problems (believe me, the benefits outweigh the problems) is that some programs don’t know how to handle Retina, or HiDPI, mode. HiDPI mode essentially renders at a high resolution while presenting everything at a smaller logical resolution – in this case, squeezing 4K (3840×2160, or 2160p) down to a 1080p-sized desktop.
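A quick way to see the arithmetic behind that squeeze – just a tiny sketch using the numbers from this post, where every logical point is drawn with a 2×2 block of physical pixels:

```python
def hidpi_logical_size(physical_w, physical_h, scale=2):
    """HiDPI/Retina mode draws each logical point with scale x scale
    physical pixels, so the desktop 'looks like' a smaller resolution."""
    return physical_w // scale, physical_h // scale

# A 4K panel in 2x HiDPI mode presents a 1080p-sized desktop:
print(hidpi_logical_size(3840, 2160))  # (1920, 1080)
```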
One example of a program that doesn’t handle HiDPI correctly is MPEG Streamclip, one of our favorite free programs we use to manipulate video. I wanted to do a screencast on the Mac Pro about MPEG Streamclip and it wasn’t behaving properly in HiDPI mode. The playbar was split and a small slice, including the “play” button, was off to the right, like this:
I then had two choices. I could do the screencast at 4K resolution, which isn’t a good choice at this point in time (just trust me), or I could record it on another machine that isn’t using a Retina display. I wound up using a 21″ iMac that has a native resolution of 1080p (1920×1080). The resulting screencast is on YouTube.
The MPEG Streamclip/HiDPI problem was put on the back burner, but I eventually wanted to research if/how I could do screencasts with these problematic programs on my new Mac Pro (actually UMW’s new Mac Pro).
Today I was doing some clean-up on a website that is being resurrected – the Digital Media Cookbook site (yet another post is brewing about that). I was using a program called Image2icon to create a new “favicon” for the site, and I knew the “Pro” version (it’s $4.00 if you’re interested) would do it. However, it wasn’t working. On their support page, an FAQ entry described an issue the program has on Yosemite, the latest Mac OS, and suggested enabling the program to “open in low resolution” mode. After that, Image2icon created my favicon without a hitch.
This got me thinking: is there an “Open in Low Resolution” checkbox for MPEG Streamclip? Take a look below to see the answer:
You right click on the program in the Applications folder and choose Get Info, then click the checkbox for “Open in Low Resolution”.
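(For the curious, there’s a related knob you can reach programmatically. An app’s Info.plist can declare the NSHighResolutionCapable key, and declaring it false makes the app open in low resolution. Here’s a sketch using Python’s plistlib – the bundle path is hypothetical, and note the Get Info checkbox itself is stored separately as a per-user preference, so checking the box is still the simpler route.)

```python
import plistlib
from pathlib import Path

def force_low_resolution(info_plist_path):
    """Set NSHighResolutionCapable to False so the app opens in low resolution."""
    path = Path(info_plist_path)
    with path.open("rb") as f:
        info = plistlib.load(f)
    info["NSHighResolutionCapable"] = False
    with path.open("wb") as f:
        plistlib.dump(info, f)
    return info["NSHighResolutionCapable"]

# Hypothetical path -- adjust for the app you're taming:
# force_low_resolution("/Applications/MPEG Streamclip.app/Contents/Info.plist")
```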
Two problems solved in one day! And now I don’t have to use another machine for screencasting. I can use HiDPI mode on any machine, including my home machine which coincidentally has a Dell 4K monitor (I got it as a Christmas present) that allows me to use HiDPI mode as well!
Transition. It can be fun. Frustrating. Overwhelming. Invigorating. All those and many other things. For me it’s always about the next step. No one I know is more critical of me than me. One of the “criticisms” I hear – in a total teasing, smack-down, weirdly motivating and one-upsmanship way – is that I, and other DTLTers, should be blogging more. And it’s true. We’re all doing good work and we should be sharing what we’re doing. Especially now.
So in that vein of sharing, I will start with an update to something I’ve gone back to the well on many times – The Kit.
For those of you who don’t know what “The Kit” is, it’s basically a live video stream setup that is as compact and portable (and inexpensive) as possible. At the time that I first gave it the name, The Kit was a backpack with a Mac laptop and a Canon video camera. The only thing that didn’t fit in the backpack was a good, sturdy Manfrotto tripod. What made the live-streaming all work was a piece of software on the laptop called Wirecast. The limiting thing, however, was that multiple cameras were difficult (not enough “inputs” on the laptop – FireWire only goes so far). On top of that, the on-the-fly encoding and streaming that Wirecast provided would tax the computer to the edge of its CPU capabilities. Reliability was an issue at times, especially if we wanted to push the capabilities (something we do in DTLT all the time).
The Kit made its debut in 2011 and remained largely unchanged from that basic set of components – until we began to get ready to move into the ITCC. As part of the research I was doing into the production spaces that would be a part of the new building, I came across what were essentially hardware solutions to what the software-based Kit provided. The production spaces of the ITCC are, in many ways, relatively expensive. I still wanted to maintain a relatively inexpensive “Kit”, and of course, keep it portable.
Many weeks ago, Jim Groom asked me if I would provide the live-stream for the 2nd Edition of the Open VA Conference in Virginia Beach. The main reason I said “absolutely” was to meet the challenge of the next version of the Kit, and to provide a multi-camera setup. I’ll be honest, a side reason I said yes was the “beach” part.
So enough of the reasons, let’s get to what the Open VA version of the Kit looked like. Here’s a fast and loose video I put together of what I envisioned I would use, and following that a list of the individual equipment used (the actual implementation was scaled back a bit):
BMD Hyperdeck Shuttle Pro – records to SSD hard drives in ProRes format
Furman Power Conditioner – provides clean power
Nady 6-Channel Rack Mount Mixer (not currently used – don’t know if we ever will)
Includes 4U Case, AV cables (HDMI & SDI), ethernet cable
Asus HDMI monitor – used for “Multiview” (Program/Preview monitor).
Canon Vixia HV40 camera (2) – generally any HDMI camera will do – the HDMI cameras are cheaper than any SDI model which start at over $2000.
BMD Mini-Converter HDMI to SDI (2) – we have to use these to get the signal converted into SDI for long runs. HDMI cables over 10’ or so just don’t work in this case.
Manfrotto tripods (2) – Model 055XPRO B w/ 701HDV head – classic solid tripods
HDMI cables (2) – out from camera into mini-converters.
Power Strips (2) – power the cameras and mini-converters.
Backpack – Carry cameras, power supplies, converters, etc.
Computer (and software) to control the switcher interface – we use a MacBook Air or Pro, but it can be a Mac or Windows machine. The computer connects via ethernet to the ATEM switcher.
We also have Photoshop on this computer. It provides on-air graphics editing that can be exported to the ATEM Media Player.
Ethernet cable – for above connection.
MagSafe Power adapter for Macbook Pro
Mac Mini – for “computer source” images (slides, web pages, video, etc.)
TP-LINK Wireless router – I set up an ad-hoc network for the ATEM. Various devices can connect to control the switcher, such as iPads, iPhones, other computers.
Live Stream Hardware – we use a Teradek VidiU w/ Ethernet cable & HDMI cable – it’s $700, but it makes it easy to stream to various CDNs like YouTube. Software solutions such as Wirecast also exist.
DVI cable – we used this to get a direct view of the Mac Mini. It’s hard to see Mac detail on the “Multiview” monitor.
Mackie 402VLZ4 4-Channel mixer – great small footprint mixer that takes room audio in then we go out to cameras.
Stereo RCA to 3.5mm – we run this cable from the mixer to the Canon Vixia to provide “system” audio (it runs via HDMI/SDI to ATEM switcher). Obviously individual channels direct in would be better. This requires separate interface hardware to do analog to digital (AES/EBU) for ATEM Television Studio. A whole other conversation.
The Blackmagic Design ATEM Television Studio is one of those unique pieces of hardware that has almost a cult following, and I think with good reason. It is a 6-input (HD) switcher for less than $1000. Nothing else can touch that price point. It has so much built into such a small footprint, and the quality is outstanding. Keep in mind when you buy one of these, there are pieces that need to be added on if you get into a complex production, which the multiple inputs beg you to do. If you’re using only one camera, there’s really no need for the ATEM. If you’re using two cameras in close proximity to the switcher (HDMI cables of less than 10 feet), the ATEM is ideal. With more cameras, or cameras at a greater distance from the switcher, you need SDI, and hence you need to convert each HDMI camera’s signal to SDI. Inexpensive camcorders don’t have the power to send an HDMI signal long distances (again, we’re talking over 10-15 feet). The Blackmagic Design HDMI to SDI Mini Converter does the trick here – for about $300 each, they turn an inexpensive camcorder into a serviceable production camera.
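That decision logic – my rule of thumb from experience, not anything from a spec sheet – can be sketched in a few lines:

```python
def needs_sdi_converter(cable_run_feet, hdmi_max_feet=15):
    """Consumer HDMI cameras can't reliably drive long cable runs;
    past the threshold, convert to SDI at the camera end."""
    return cable_run_feet > hdmi_max_feet

def plan_converters(camera_runs_feet):
    """Count how many HDMI-to-SDI mini-converters a shoot needs."""
    return sum(1 for run in camera_runs_feet if needs_sdi_converter(run))

# Two cameras on 50' runs and one on a short 6' run next to the switcher:
print(plan_converters([50, 50, 6]))  # 2
```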
I’ll talk more about how I’m using this kit, as well as future iterations, but here’s an example of one of the live streams at Open VA (featuring several of my DTLT colleagues):
I had a bit of unexpected fun yesterday. One of the things (on my long list of things) to explore this summer is closed captioning (subtitling/transcribing) videos and getting a manageable workflow going. As we begin the Fall semester in about 6 weeks, I want to have a plan for implementing transcriptions as a part of the many videos that we will begin to produce in the new building (you know that ITCC thing I keep talking about?). I’m working on that workflow and hope to have recommendations soon.
Meanwhile, I was playing around with the YouTube Closed Caption tool. It looks to be a great way to start the process of getting automatic transcriptions for video, although, as it is the subject of this post – it’s not perfect.
What was particularly entertaining was the attempt by the transcription service to get the terms Domain of One’s Own, and LMS, correct. On rare occasions it would get them right, spelling out the words “domain of one’s own”, albeit in lower case, and the acronym “LMS”. However, it did struggle. Here’s where it got entertaining. It seems to pick on Martha and Jeff the most. First, Domain of One’s Own . . .
YouTube’s struggles with LMS (as in Learning Management System) were equally funny.
As well as saucier versions . . .
And my favorite . . .
The actual spoken words from the above clip are “closed walls of the LMS”. See, YouTube Closed Captions can even teach you about geographic locations you didn’t know about – Almazán, Spain. And I never knew about its association with Wellesley. Oh, and don’t forget Alamosa, Colorado.
To finish up the fun, there were a couple more transcription errors – one just basically silly, and another one fun in a teenage boy kind of way. First . . .
You can guess what the real spoken words are in this next one . . .
When it’s all said and done, it is amazing what an accurate job this automatic transcription service does. Anyone who has the task of creating captions for a video might find it quite entertaining. I hope the student aides that I assign to this task think so as well.
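If caption cleanup does end up on the student aides’ plates, it helps that the downloadable caption format is dead simple. SubRip (.srt), one of the formats YouTube can export, is just a numbered cue, a timecode range, and the caption text. A minimal parser sketch (the sample cues below are made up for illustration):

```python
def parse_srt(text):
    """Split SubRip caption text into (start, end, caption) tuples."""
    cues = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = lines[1].split(" --> ")  # timecode line, e.g. 00:00:01,000 --> 00:00:03,500
        cues.append((start, end, " ".join(lines[2:])))
    return cues

sample = """1
00:00:01,000 --> 00:00:03,500
closed walls of the LMS

2
00:00:04,000 --> 00:00:06,000
domain of one's own"""

for start, end, caption in parse_srt(sample):
    print(start, caption)
```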
This is a follow-on from my last post. Some of us DTLT folks got another chance to see the progress of the ITCC (if you don’t know what that stands for, read more of my blog). As I mentioned previously, I’ve been concentrating on the DTLT Edit Suite, and the progress of that room is coming along nicely. Here is just a brief sequence of photos to show you where we are.
It started out with the framing:
Then it got walls:
As of June 11, 2014 it’s got paint and carpet:
Most of the equipment for this room has been ordered and will start to arrive soon. One of my next posts will go into detail on the actual equipment setup. Stay tuned.
Imagine the possibilities. My mind is preoccupied with what the Information and Technology Convergence Center (ITCC) will do. I have to think of the possibilities of individual rooms, as well as how those rooms fit into the overall vision. I’ve got my vision for video and audio production in the ITCC and if I could sum it up in a word, it would be “enable”.
We have a video recording studio in the building unlike anything we could have imagined a few years ago. It’s very exciting, and I hope to write more about that space soon. Right next door is a space for editing digital projects (and even a vocal booth for quiet audio recordings). But to me, the whole building is a production studio. There are lots of great spaces to capture (i.e. video record) conversations, and as I’ve said before, it is about furthering digital scholarship.
The space I’m currently thinking the most about is within the DTLT suite. It is adjacent to the “bullpen”, but in many ways I’m thinking of it as an office – a word derived from two Latin words, opus (work) and facere (to make). The idea of this room is to serve as an editing suite – a new Mac Pro, two 32″ 4K monitors, a large 24TB RAID array, a microphone and a new digital 4K camera for recording, along with video switching and routing equipment to, you guessed it, enable possibilities.
The other part of this space is a “viewing” area. Projects can be visualized at any given point in time on a large 4K home theater style monitor (I’m shooting for 70″). At any point in the production process we can suggest elements to add to a project such as music, sound effects, visual effects. Faculty and students (staff too?) will be able to sit comfortably in a space and help make editorial decisions. That’s something else that we couldn’t have imagined a few years ago.
So the space illustrated at the top of this post is a general idea of what I’m thinking. Here is what it looks like as of May 1, 2014:
Here’s another shot taken from inside the room:
And here’s the visualization:
With the help of some software, in this case a program called Live Interior 3D, I can quickly drag in some elements (although they’re somewhat generic – note the huge desktop PC element instead of the Mac Pro) to visualize the space. Don’t you think that rug ties the room together?
I’ve also got a QuickTime VR video (download it for better performance) of the space, again courtesy of Live Interior 3D.
Anyway, this is the space my head is in lately. I’m imagining the space and also thinking about hardware and software that will help realize the visions of members of the UMW community.
One of the initiatives that I am currently working on here at UMW is something called the Digital Media Commons Initiative. Part of the purpose of that program is to get people up to speed with some more sophisticated digital video and audio equipment. We are going to have a full-blown studio in the new Information and Technology Convergence Center, so people will use some pretty high-end equipment in that space.
DTLT also has this thing called “The Kit”, which is a portable “studio” that can be set up in a variety of spaces. Mostly we have it set up in our office with a green screen, and we use Wirecast to control the broadcast (live-streaming and recording). Because of the nature of the laptop, it is limited in terms of the number of camera inputs, computer inputs, etc. We need to shift to the next gear.
The episode of DTLT Today (#112) included above, begins to describe what that next gear is. We needed a full-on switcher with true multiple inputs so we can do multiple camera angles, include computer content such as demoing websites, Skype conversations (or Google Hangouts), playing YouTube videos, and so on. The video is pretty rough, but it goes over some of the components that we used. I’ll let the video itself do the rest of the talking, but I did promise that I would list the equipment that we used, so here it is:
Lately it’s been knee-jerk to Tweet an article that we recommend to our followers to read. I do it with articles, videos and funny pictures all the time. A long time ago, in a place not so far away (right here actually), I would blog about articles that I recommended. It would be a quick post with a link and maybe some short commentary. Blogging is not dead for me, even though we joke about it in the DTLT office. We are not as prolific as our fearless leader, our “Big (Blogging) Toe“.
However, now it’s time to BLOG about an article. One that I feel is extremely important. I guess it’s so important that I didn’t Tweet it – I need to BLOG it!
Did you read it? If you did, good. No, great! Now go act. Contact the FCC. Save the Internet before it’s too late. I’m not being hyperbolic. The Internet as we know it, or rather, knew it, is being morphed from something that serves the needs of the public into something that serves the needs of the few companies that provide services and access to it – with no competition and ever-rising prices for access.
To begin our story, the state of Virginia, and other southern states have recently had to deal with at least a couple of nasty winter storms. I write this as my university has closed for the second day in a row courtesy of about 10″ of the white stuff. This most recent storm crippled traffic in the Raleigh, NC area, in the same manner that a couple weeks ago traffic was at a standstill in Atlanta, GA.
Just prior to that storm in Atlanta, we here in Fredericksburg had a storm that dumped enough snow to make “snow cream” (I tweeted about it, as shown above).
When the Atlanta storm hit, not only was it unusual for such a storm to be in that area (though not unprecedented), there also arose a conspiracy theory. Fake snow was manufactured by the government, so the theory goes. It contained nanobots, involved chemtrails, and even came with a specific warning for people NOT to make snow cream out of it, and, well, let me let him explain it . . .
OK, say what you will about this explanation and “theory”, it was brought about by some unexpected behavior of snow in a place where it’s not normal to have it. When a lighter is put to that snow, it doesn’t appear to melt, but instead disappears or even burns, leaving behind some black marks. What is the explanation? Well, let me refer you to this guy . . .
So it’s sublimation. That explains the so-called conspiracy. There. Done. Further experiments show that the snow does indeed melt just like we expect. Now, sublimation is a term, as this gentleman indicates, from “science” – it is when a solid skips the liquid state and goes straight to a gas. When the snow is heated, as with the lighter, it doesn’t melt. It turns directly into a gas and disappears. Or does it? Here’s the real explanation . . .
This video is a bit longer than the other two videos, so if you’ve got a short attention span, the explanation is that the snow isn’t fake, but it doesn’t sublimate either. What happens is that the snow absorbs the melting water when the flame from the lighter is applied. It is well demonstrated when the snow is put in a heated pan and melts. Water doesn’t appear in the pan right away. What you see is the snowball get more and more slushy (to use a scientific term), until the snowball can no longer hold the water, then water disperses in the pan and eventually we are left with just water.
So be honest with yourself. How many of us would have been satisfied with the sublimation explanation? Obviously many people were. Imagine my excitement when it was posited that the behavior wasn’t explained by sublimation, but by an even simpler process – absorption (and the “soot” is there because of a separate chemical process: burning leaves hydrocarbons behind on the snow).
The point of this post is to ask “what makes us hold our beliefs?” At what point do we walk away satisfied with our answer? Why do we tend to not go deeper? Is it laziness? Lack of curiosity? The definition of science is, in a word, knowing (or knowledge). But scientists don’t stop. They also know that there is STILL plenty of stuff we DON’T know. They keep going because they know there is MORE knowledge out there.
Thanks to Gardner Campbell’s post about the process of discovery, I was reacquainted with this video . . .
The interviewer asks a question he thinks will garner a simple explanatory answer – What’s going on with two magnets when they either repel, or when turned around the other way, attract each other? Richard Feynman’s answer is far from simple. Gardner goes on to describe the “bad Sunday School technique” where the teacher poses a question that has essentially only one right answer. Why ask the question when it results with a dead end?
He also mentions Jerome Bruner and his approach of not “problem-solving” but “problem-finding”. Now goodness knows that academia is riddled with something known as “problematizing”, or creating a problem out of something that should be straightforward. It’s the stuff that makes your head hurt after a committee meeting designed to move something forward, when someone asks that one additional question, “have you thought about this…?” Thus the ultimate question behind it – “What if we get this wrong?”
One of the money quotes from Gardner’s post:
“For it seems to me that we are tempted to imagine reflection as a process of discovering and affirming lessons learned and problems solved, when anyone who has spent a moment in reflection will realize, I believe, that the depths of that practice awaken conjectures and dilemmas.”
This is the dichotomy. At a certain point we make decisions based on the best information – the information that we believe to be true. But there is perhaps infinitely more depth to the questions we are asked.
I’ll stop here, at least for now, because my head, and probably yours, is beginning to hurt. This all reminds me of this scene from Animal House . . .
I’ll end with two more points. First, go read Gardner’s post. It is one of those posts that I am convinced is leading toward good things. Thinking about thinking.
Second is the question many people asked when Bill Nye debated Ken Ham. Why in 2014 are we still debating Evolution vs. Creationism? Was the question answered in this almost three-hour debate? I’d be surprised if there were many people who moved to the evolution side (or to the creation side, for that matter). Why is that? Because people believe what they want to. They will live with that satisfaction for as long as they want to. They will either stop seeking, or something will trigger them to go deeper. It shouldn’t be difficult to encourage people to go deeper, but we as teachers sometimes get to the point where we require it instead. That’s where a good teacher comes in – finding ways to encourage it rather than demand it.
Epilogue – So the last of the “rivers” that I mentioned above is a project from Kirby Ferguson that is as he calls it “A serialized documentary about the forces that shape us.” I have no idea what will ultimately come out of it, but it has that hook, for me at least, to want to find out more. If Kirby’s “Everything Is a Remix” is any indication (and why I ponied up 12 bucks), it should be terrific!
I am happy to finally be able to announce that the University of Mary Washington has partnered with MediaCore to run a pilot installation of their media platform. MediaCore will mean many things to many people, but more generally it will give members of the UMW community the ability to control and curate their media collections. It will be more than just a “campus YouTube”.
Speaking of YouTube, we all know it and (mostly) love it. However, when it comes to student media projects, especially when they incorporate copyrighted material well within fair use, they can still get dinged with the takedown-algorithm hassle. Students having their own media space is crucial for their experimentation and expression (kind of like Domain of One’s Own). What better way to do it than within an educational context, on a platform that is specifically geared toward educational media hosting? MediaCore will serve that function and allow students to share their media work locally behind a login, or make their work public when they want/need to.
The other idea behind using MediaCore is the idea of curating “collections”. YouTube, Vimeo, and even content from TED Talks and Archive.org can be curated by a user to share unique combinations of media elements. It allows the viewer to go to one place to view media from disparate places. “Playlists” can then be easily incorporated into a WordPress site or into Instructure Canvas – what UMW uses as its university LMS.
MediaCore is also “mobile ready”, from both a viewing aspect as well as an “ingestion” aspect. MediaCore’s Capture app for mobile devices allows a user to capture video and upload it directly into their MediaCore space. However, it’s not limited to video that you might have just shot. It includes any media that you have saved to your “camera roll” (I can’t speak to how it works on Android devices, but I imagine it’s similar). So this might include images, screenshots, screencasts, or other video produced in any app that saves to your smartphone.
Finally, what will make MediaCore special to the UMW community is the integration with WordPress (both UMW Blogs and the Domain of One’s Own initiative) and with Canvas. MediaCore makes available a WordPress plugin and Canvas LTI integration that will allow users to post video to a WordPress post or page, and any Canvas area (pages, modules, etc.) that uses the visual text editor. You can also upload video to MediaCore through WordPress or Canvas plugin interfaces.
So we will try to push MediaCore to its limits and see what’s possible. MediaCore support has been very responsive to questions as well as suggestions for new features. So let the pilot begin!