This post was originally written in 2011. I removed this post while I was an employee of Universal Parks & Resorts Creative, a subsidiary of Comcast, from March 5, 2018 until November 6, 2020. This was still an issue when I left the office.

November 6, 2020

As my faithful Twitter readers know, I have been having some issues with my computer attaching to the network at the office: Outlook locking me out, the Windows domain server locking me out, IT (Information Technology) changing the network configuration, the entire system going down, etc. Some of these issues were due to the configuration changes IT is making, some were unforeseen, and some were just plain dumb luck.

Something that surprises me, though, is that for how much we like to cast aspersions on IT, sometimes we are our own worst enemy. By we, I mean the users. Not just at my company but pretty much everywhere, IT has a love-hate relationship with the users: the users love to hate IT. I am not saying that IT is beyond reproach, but some of the decisions we make often make things worse for everyone.

One of the most common complaints I hear is about the speed of the Internet. The next most common complaint is that many IT departments limit streaming or some of the social network options. These concerns and complaints are all interrelated, and it comes down to a question of capacity.

Many offices are connected with a T1 connection, which sounds “fast” but in reality is not so much. The standard is that a T1 is 1.544 Mbps (megabits per second). The typical upper limit on residential DSL is 3 Mbps. Cable is much faster, with an upper limit of 30 Mbps. Based on that, it is easy to see why people often say, “The Internet is much faster at home.” Of course the first question is: why not just bring in something other than a T1? Yes, it is possible, but most businesses are looking at uptime and guaranteed bandwidth. Most contracts for a T1 or similar service state that you will have a certain level of uptime or availability as well as guaranteed minimum speeds.

Most residential broadband services rate their speed as “up to 22 Mbps” or something similar. They also typically do not guarantee your uptime or availability. The Comcast Guarantee does not include a guarantee for availability or speed; the Residential Agreement also has no speed or availability commitment, and the only credits occur after a 24-hour continuous outage. The business agreement has the same lack of performance commitments.

So if I were running a business, would I rely on a connection that might be non-functioning for a day with no speed minimum, or would I rather have higher availability and slower speed? I would take the one with a real service level agreement stating what bandwidth and connectivity will be delivered.

The next item that impacts the speed is the number of people using that connection to the Internet. At your house, where you might have speeds up to ten times faster, you will typically have no more than four people using the connection at the same time. Now compare that to a business environment; forty people sharing a connection would not be unheard of, would it? Not only is there less bandwidth, but more people are using it.

So if there are 40 people sharing a 1.544 Mbps (1,544 kbps) connection, let’s divide it equally: each person gets 38.6 kbps. Remember dial-up modems at 33.6 kbps? Now one user decides to stream a video; the typical bandwidth options are 300 kbps, 500 kbps, or 700 kbps. If the user streams the video at 700 kbps, they have effectively used half of the entire T1 (okay, it is only 45%, but don’t forget the rest of the content on the page). So now, because of one person, everyone is experiencing delivered speeds that can be slower than a dial-up modem. Remember, the bandwidth is shared by everyone.
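The arithmetic above can be checked with a quick sketch; the 40-user office and the 700 kbps stream are the illustrative numbers from this example, not measurements:

```python
# Back-of-the-envelope numbers from the paragraph above: a T1 shared by
# 40 users, and the effect of one 700 kbps video stream on everyone else.
T1_KBPS = 1544          # T1 capacity: 1.544 Mbps = 1,544 kbps
USERS = 40
STREAM_KBPS = 700       # one user streaming video at 700 kbps

fair_share = T1_KBPS / USERS
print(f"Equal share per user: {fair_share:.1f} kbps")   # ~38.6 kbps

# If one user takes 700 kbps, the remaining 39 users split what is left.
share_after_stream = (T1_KBPS - STREAM_KBPS) / (USERS - 1)
print(f"Share for the other {USERS - 1} users: {share_after_stream:.1f} kbps")

stream_fraction = STREAM_KBPS / T1_KBPS
print(f"Fraction of the T1 used by one stream: {stream_fraction:.0%}")  # ~45%
```

One stream leaves everyone else with roughly 21.6 kbps each, which is slower than the 33.6 kbps dial-up modem mentioned above.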

Yes, the same thing happens in hotels, coffee houses, airport lounges, etc.; bandwidth is shared.

So if I were responsible for the productivity and availability of the Internet at a business, what is the first thing I would do? Turn off streaming. Why? It is a bandwidth hog, and there are typically more important things to use the bandwidth on that directly impact staying in business.

Yes, I still think that many IT departments make decisions that are not helpful to end-users. Yes, I think that the help desk often doesn’t help. I just want to point out that we, the users, are sometimes the problem. Please, before you decide to fire up Pandora or Slacker, or surf YouTube, think about whether you are slowing down others. Don’t be a bandwidth hog.

My solution? I take lunch after most people and stay later than most. Why? Once everyone has left for lunch or for home, I get better bandwidth. I also listen to music using my iPod.

This post was originally written in 2011. I removed this post while I was an employee of Universal Parks & Resorts Creative, a subsidiary of Comcast, from March 5, 2018 until November 6, 2020. As a result of moving in 2018, my provider options changed, as did the bandwidth landscape overall. This issue is still just as important.

November 6, 2020

Over the past few weeks there has been talk about Net Neutrality, including the FCC making rulings. I will be the first to admit that my writing about the issue is a little late, as the decisions have already been made. However, the decisions are not final, and with Joe Lieberman now wanting the ability to turn off the Internet, it is time for us to get more involved with these issues.

The item I am concerned about is what happens when Internet access providers start favoring their own services over the competition. Now some will say that one has the ability to change providers of high-speed Internet. This is not entirely true. Just as one cannot freely choose which cable television company to use in the United States, one cannot freely choose which high-speed provider to use. The Internet providers are limited by both technological needs and government mandates. Yes, one can use satellite or wireless or other solutions, but it is not always an equal delivery of services. Think about the issues AT&T had with traffic saturation and the iPhone.

Currently my options for high-speed Internet access at my home are:

  • Comcast Cable Modem (22 Mbps down/6 Mbps up)
  • AT&T DSL (1.5 Mbps down/384 kbps up)
  • Earthlink or other dial-up (33.6 kbps down/33.6 kbps up)
  • HughesNet (2 Mbps down/300 kbps up; capped at 400 MB of data a month)
  • FiOS and U-verse are not available

So given these conditions I am pretty sure that all of us would choose Comcast. Also, given the pricing structure, Comcast makes the most sense financially. Now Comcast has some programs in place to provide additional services for their customers’ use. Comcast offering Mozy is an example of these extra services.

From the Comcast press release: “Comcast High-Speed Internet customers automatically receive 2 GB of storage included with their subscription. This amount allows for storage of up to hundreds of photos, music files, or thousands of documents. Comcast also offers a 50 GB storage plan for $4.99 monthly or $49.99 annually, and a 200 GB storage plan for $9.99 monthly or $99.99 annually.” The webpage outlines the basic examples.

I knew that I needed more than 2 GB of backup. I wanted offsite storage in addition to backup. The differences between storage and backup can be subtle, but that is another blog post. After looking at the options I decided to use JungleDisk; it is less expensive per month and has other features I want.

One can easily see how JungleDisk is competition for Mozy. They offer similar services, and both require high-speed connectivity to work effectively. What happens if Comcast were to decide to prioritize the traffic to Mozy and degrade the traffic to JungleDisk?

The issue of how one selects a service becomes much more complex. If the bandwidth I use to connect to JungleDisk were throttled back, wouldn’t that change my experience and cause me to consider another solution? All of a sudden Mozy would be much more of an option as a result of being much faster for me as a Comcast user. Having a backup take an hour instead of two can be a very big deal, especially if one is trying to back up data before leaving on a trip.
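To put rough numbers on that hour-versus-two claim, here is a small sketch; the 2 GB backup size and the 6 Mbps versus 3 Mbps upload speeds are illustrative assumptions, not measured figures:

```python
# Hypothetical example: how long a 2 GB backup takes at the advertised
# upload speed versus the same link throttled to half speed.
def backup_hours(size_gb: float, upload_mbps: float) -> float:
    """Hours to upload size_gb at upload_mbps (using 1 GB = 8,000 megabits)."""
    megabits = size_gb * 8000
    return megabits / upload_mbps / 3600

full = backup_hours(2, 6.0)       # assumed 6 Mbps upstream at full speed
throttled = backup_hours(2, 3.0)  # the same link throttled to half speed
print(f"At 6 Mbps: {full:.2f} h; throttled to 3 Mbps: {throttled:.2f} h")
```

Halving the effective upstream doubles the backup time, which is exactly the kind of degradation a user would notice and blame on the service rather than on the provider doing the throttling.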

Now you might ask under what guise Comcast would throttle traffic like that: “network management.” I can easily see a situation where Comcast decides that backups running at 2 AM on everyone’s computer are causing congestion. The first instinct of any reasonable business is to make sure its customers’ and partners’ experience is optimized, to keep complaints to a minimum. The majority of users might be using Mozy, since it is included, and I would be in the minority using JungleDisk. So the decision to correct the problem for the majority by giving priority to Mozy would make sense from a customer satisfaction standpoint. I am glossing over how this management can be done; it is not just how data is transmitted to my location but also how the traffic is transmitted across the interconnections of the Internet itself.

Due to the partnership between Mozy and Comcast and possible bandwidth management, Mozy might gain me as a customer while JungleDisk would lose me. Beyond that, I would lose as a consumer, as the choice I made would be compromised. I would have to look at the ability to use a service, not just its price.

This issue applies to many other products: virus protection software, website hosting, picture hosting, voice services. Yes, Vonage and Skype can be blocked, and they already have been blocked by Internet Service Providers, the same ones that offer phone service. The FCC did require the voice services to be unblocked.

To paint with a very wide and absurd brushstroke, it would be akin to the electric company also selling light bulbs. Of course their light bulbs work better for most users; the power was optimized to work with the electric company’s bulb vendor, so people could not tailor their light bulb choices. To get effective lighting, the user is relegated to purchasing what the electric company is selling, even if it isn’t the best solution for them.

Let me know if you want me to talk about Comcast now having NBC/Universal content. I’m sorry, why is Netflix or ABC or Fox or Hulu or … streaming so slowly?

So when people talk about Net Neutrality, it is not just something for the technophiles. It can impact anyone who uses the Internet.

This post was originally written in May 2017. I am reposting this piece as I believe it is extremely important. It is even more of an issue with the COVID pandemic and the number of children learning at home.
Bradford – September 17, 2020

Many of you know I am a proponent of online privacy. Recently I received an article about the implications of Educational Technology (EdTech) and how its use impacts privacy. While I don’t have children, I believe that their education is important. Part of that education is learning about privacy: what is appropriate online, and that surveillance is not standard.

Much of this information is sourced from the “Spying on Students” EdTech report released by the Electronic Frontier Foundation. I found several things interesting: for a child enrolled in Google Apps for Education (GAFE), many of the privacy decisions are taken away from the parent and given to the school system through the GAFE administrator. Under the agreement Google makes with the school system, many of the decisions are made for the student by the education department without checking with the parent. Some will say I am a cranky old person with this next phrase: “When I was in school, we needed a permission slip for a field trip. Now the school is deciding the online presence of their students – without any permission.”

The school system can create a Google account with personally identifiable information for a minor without parental consent. If the parent (or guardian) asks for the information to be deleted, it is the school administrator’s decision whether the request will be honored. Yes, the parents don’t get to choose. There are hundreds of pieces of education software or services in use, each with its own terms of service and privacy policy to review; I do not want to think about how long it would take to read all of these agreements. Some of these services are owned by Google and will share information with GAFE. Once again, the majority of these services are configured by the school system, not the parents.

The EFF has collected case studies to help illustrate the concerns and challenges. You can find them here.

Right about now you are asking why I am talking about this topic. Many students are learning from home. There are various software packages and technologies being used, and not many people will think about how the privacy of students is considered. Asking questions such as, “Does this require signing up for an account?” or “Can one plug in a USB storage device or use a local network connection?” can assist in evaluating the solutions.

Just as one would ask about security for a corporation or a government project, one should think about it for education and the home network. More often than it should be, the technology provider is the one educating the schools about the complex issues of using newer technology. Are you ready to ask questions?

Think about how you would feel if your child were being watched by Google without your permission. Not just teenagers; children just starting school.

Hat tip to the EFF for their open source student privacy logo.

My involvement with the EFF and AVNation have also included comments about privacy: AVNation Privacy & EFF Mail Links.

Something I realized while thinking about this subject is that if one sends very few encrypted e-mails, the ones that are encrypted will stand out in the mail being sent. Now you might wonder what I am doing that requires encryption. It is more practical than you might think; a simple example is transmitting financial information.

I have an additional reason now: confusing the government and anyone else monitoring traffic. This idea is discussed in Cory Doctorow’s book Little Brother; the section below is used under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 license. The quote comes from line 1826 in the HTML version available on Mr. Doctorow’s website.

“So how come you weren’t on Xnet last night?”
I was grateful for the distraction. I explained it all to him, the Bayesian stuff and my fear that we couldn’t go on using Xnet the way we had been without getting nabbed. He listened thoughtfully.
“I see what you’re saying. The problem is that if there’s too much crypto in someone’s Internet connection, they’ll stand out as unusual. But if you don’t encrypt, you’ll make it easy for the bad guys to wiretap you.”
“Yeah,” I said. “I’ve been trying to figure it out all day. Maybe we could slow the connection down, spread it out over more peoples’ accounts –“
“Won’t work,” he said. “To get it slow enough to vanish into the noise, you’d have to basically shut down the network, which isn’t an option.”
“You’re right,” I said. “But what else can we do?”
“What if we changed the definition of normal?”
And that was why Jolu got hired to work at Pigspleen when he was 12. Give him a problem with two bad solutions and he’d figure out a third totally different solution based on throwing away all your assumptions. I nodded vigorously. “Go on, tell me.”
“What if the average San Francisco Internet user had a lot more crypto in his average day on the Internet? If we could change the split so it’s more like fifty-fifty cleartext to ciphertext, then the users that supply the Xnet would just look like normal.”
“But how do we do that? People just don’t care enough about their privacy to surf the net through an encrypted link. They don’t see why it matters if eavesdroppers know what they’re googling for.”
“Yeah, but web-pages are small amounts of traffic. If we got people to routinely download a few giant encrypted files every day, that would create as much ciphertext as thousands of web-pages.”

My action is relatively small and rather simple to do. However, the fact that it changes the traffic profile could be helpful for others: it will prevent other PGP/GPG encrypted traffic from being such an outlier as to be noticed. As the EFF posted on Data Privacy Day, privacy is a team sport. There are additional directions for how to do this task in the tutorials section. If you want to test whether it worked, my public key identifier is C93A52C6. You can download my public key directly from my site. I will also freely admit I am not sure if it will make a difference, but it could not hurt.

Originally posted at on January 11, 2018

As many of you know, I was laid off from Harman Professional on December 21, 2017. This column is about what I have been doing since then. If you are looking for a post talking badly about Harman or spouting all sorts of venom, keep looking; this is not the article you are looking for. I am also not going to use this as my personal blog, talking about playing in the snow and going to the movies, etc. This article is about what I have been doing within AVNation.

As some of you know, Tim takes an annual vacation during the holiday season. This year, he made a mistake: he left me in charge, and in fact deputized me to make improvements in our processes; all of our processes. Over the past six and a half years we have cobbled together AVNation’s processes and data management on an ad hoc basis. As we needed things, we added them, sometimes without thinking toward the future. (That is the polite way of saying they didn’t ask me.) As a result, various pieces of data are stored on different services or on someone’s hard drive. In addition, there were things besides Sugar Plum Fairies in Tim’s head: agreements, show topics, projects, blogs, and thoughts of the Bears. I put on my digital janitor outfit and went about organizing and cleaning.

I am confident we are not the only entity that needed to do this; every company should go through this process on a regular basis. It frees companies from the quagmire of “We have always done it this way” and instead allows people to ask, “Why do we do it this way?” I know from experience that new software is sometimes implemented so that it acts just like the old software, whether that is a best practice or not.

The first question I asked myself was, “Self, what makes doing AVNation stuff hard?” I came up with a list; some of these might also be issues in your organization. The list was longer than I thought it would be. I also included comments from Tim, gathered as we were evaluating our software contracts.

  1. Stuff is all over the place!
    1. Slack
    2. Trello
    3. Google Drive
    4. Dropbox
    5. Amazon S3
    6. Email Threads
    7. Google Hangouts
    8. Google Chats
    9. Zoom Meetings
    10. People’s personal computers
    11. Internal wiki
  2. Scheduling meetings
  3. Helping people with their email
  4. Collaboration tools
  5. Managing shared tasks
  6. Project Planning
  7. Where source files are located
  8. Who are the active underwriters
  9. Finding scopes of work for each underwriter
  10. Locating logos for each underwriter
  11. Poll topics & schedule
  12. What is the editorial calendar
  13. Documenting the process of…
    1. titling podcast episodes
    2. posting an episode
    3. getting reimbursed for money I have spent
  14. Making the other team members aware of website stuff
  15. Letting other people tell me about website issues
  16. Tim’s List
    1. Email Lists for newsletters
    2. Management Tool for newsletter subscriptions
    3. Landing pages for links in newsletters or for other special events
    4. Customer Relationship Manager for the Sales/Underwriting portion of the business
    5. Integration with e-mail – nice to have
    6. Tracking lists – nice to have
    7. Tracking Deals – nice to have

I am pretty sure that most people evaluating other companies would come up with very similar lists. I know that we are not the only ones trying to figure out these issues; they are not unique. So I did what every person does: I went to the Googles. Actually, I use DuckDuckGo; the reason is simple: “Our privacy policy is simple: we don’t collect or share any of your personal information.” I started searching for “collaboration software”, “file sharing”, “revision control”, “group scheduling”, and “shared task lists”.

There were lots of solutions available as open source. Some looked promising; some were not worth the time it took to download them. Having used various tools in my career, I kept comparing everything to Microsoft Exchange for many of the features: email, contacts, calendaring, task management, shared calendars, shared tasks, shared contacts. I went in search of a hosted Exchange solution, one that we could afford, which is where Software as a Service is helpful.

Previously I had used Rackspace e-mail for my personal mail, and I was happy with it. The reason I stopped: Apple broke it. Apple did an iOS update and e-mail was no longer pushed. I know Rackspace’s support is excellent, and I knew the service would solve our group scheduling issues as part of the package. I also knew that they offered hosted Microsoft Exchange. Off to Rackspace to order up Exchange. Then came the surprise of the day; SaaS came through again. For the same price as hosted Microsoft Exchange, one could subscribe to hosted Office 365, including Exchange and SharePoint; you can see it here.

I reviewed my list comparing it to the applications within Office 365.

  1. Stuff is all over the place! – OneDrive, 1TB per user
  2. Scheduling meetings – Exchange calendaring
  3. Helping people with their email –
  4. Collaboration tools – SharePoint, Planner, ToDo, Skype for Business,
  5. Managing shared tasks – SharePoint and Exchange
  6. Project Planning – SharePoint templates and sub apps
  7. Where source files are located – All in OneDrive can manage in SharePoint
  8. Who are the active underwriters – SharePoint
  9. Finding scopes of work for each underwriter – OneNote or SharePoint
  10. Locating logos for each underwriter – OneDrive
  11. Poll topics & schedule – OneNote or SharePoint page
  12. What is the editorial calendar – Exchange or SharePoint
  13. Documenting the process of… – SharePoint, OneNote
  14. Making the other team members aware of website stuff – SharePoint, Yammer, Exchange
  15. Letting other people tell me about website issues – SharePoint, Yammer, Exchange
  16. Tim’s List
    1. Email Lists for newsletters – Excel, Word, and Outlook
    2. Management Tool for newsletter subscriptions – Excel
    3. Landing pages for links in newsletters or for other special events – WordPress already being used
    4. Customer Relationship Manager – SharePoint, Exchange, OneDrive
    5. Integration with e-mail – Microsoft Flow
    6. Tracking lists – SharePoint or Exchange
    7. Tracking Deals – SharePoint

Yup, Microsoft had done their research and figured out what most businesses need. They had lots of ways to work built in, as well as recommended practices that made things work easily. They also added solutions for when they didn’t have the proper tool. Microsoft includes Microsoft Flow, an easy-to-use automation tool, and that is where the suite really shines: automation that includes connections to other services we use. For example, the e-mail list solution in Microsoft Office is not as powerful as MailChimp, but we can easily create a link between our Microsoft data and MailChimp. The same goes for SurveyMonkey, Twitter, and WordPress, our main website engine. It has also allowed Tim to customize his workflow to meet his needs; I do not have to make any changes to my workflow or the base solution.

Through this review I found out lots of things. I found cost savings of almost US$10K, the vast majority of which came from ending our subscription to our CRM and Social Media Management service. The service is powerful, too powerful, and does not actually meet AVNation’s needs. We would not have realized this if I hadn’t sat down to review our needs, our processes, and our software to see if they all still aligned. Ours did not. I was able to come up with a solution that simplified our day-to-day operations (one location for information) and saves us money. Spending the time to review how we work is something we can all probably benefit from, both on a personal and a professional level.

Now if you will excuse me, I have to document the changes so that the rest of the team knows what I have done. The great thing is that I know exactly where to put it now.

On Sunday, May 7, 2017, John Oliver told his audience about Net Neutrality. During his 20-minute segment he indicated that will redirect people to the FCC page to leave comments. You can view the video clip, approximately 20 minutes long and definitely R-rated and NSFW, at Continue reading “FCC Declares DDoS, I declare Shenanigans”

With apologies to Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb

Let me put in the disclaimer first, this blog post as well as everything on this site are my opinions and do not reflect the opinions of my employer or anyone else.

One of the interesting things that has been occurring recently has been people around me talking about CTS certifications from InfoComm. It has ranged from ribbing by people who have certifications to people questioning my knowledge base in audio and video. I appeared on a Tech Chaos podcast to discuss this topic during March 2016. During the InfoComm trade show in June of 2016, I had heard enough. The breaking point was when one of my colleagues, after I did not know something off the top of my head, said, “Well, if you had a CTS maybe you would know.”

Being the sarcastic and acerbic person I am, I responded by saying there is only so much RAM to hold information and that the question at hand could be looked up, as I pulled out my handheld device. The question was how one calculates the viewing distance from a display. I then asked a question that is just as relevant in today’s AV world: given a subnet mask, were two IP addresses on the same subnet? Yes, I was being petulant; as I said, I am sarcastic and acerbic. Basically, someone questioning my knowledge base because they had a CTS certification and I didn’t rubbed me the wrong way. As the ribbing continued, I brought up the fact that I teach classes that qualify for Renewal Units (RUs). (To maintain a certification, one must acquire 30 Renewal Units every three years.) The volley went back and forth until I finally pulled out the sledgehammer and asked how many projects they had designed, fabricated, installed, configured, and commissioned that were the lead story on the national news. It got very quiet.
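For the record, the subnet question from that exchange is easy to answer with Python’s standard ipaddress module; the addresses and mask below are made-up examples:

```python
import ipaddress

# Two hosts are on the same subnet if applying the mask to each address
# yields the same network. ip_network accepts a dotted netmask after "/",
# and strict=False allows host bits to be set in the address.
def same_subnet(ip_a: str, ip_b: str, mask: str) -> bool:
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{mask}", strict=False)
    return net_a == net_b

print(same_subnet("192.168.1.10", "192.168.1.200", "255.255.255.0"))  # True
print(same_subnet("192.168.1.10", "192.168.2.10", "255.255.255.0"))   # False
```

Which is exactly the point I was making with the handheld device: knowing where to look something up can matter as much as memorizing it.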

I was able to formalize my thoughts after that discussion: many certifications simply indicate that someone can take a standardized test effectively or has sat through a class with no testing. I will give InfoComm credit for pointing out that certification doesn’t guarantee competency. From the Certified Professionals Directory webpage:

Certification is not a guarantee for performance by certified individuals. Certified Technology Specialist™ (CTS®) holders at all levels of certification have demonstrated audiovisual knowledge and/or skills. Certified individuals adhere to the CTS Code of Ethics and Conduct and maintain their status through continued education. Certification demonstrates commitment to professional growth in the audiovisual industry and is strongly supported by InfoComm.

Chuck Espinoza and I had a discussion about the certification and the process during InfoComm 2016. He made some interesting points, so I decided I was going to sit for the certification. It would not be equitable for me to have an opinion without a better sense of the process. Perhaps the other way to look at it: if you want to defeat your enemy, learn to sing their songs.

I showed up at the appointed time and was shown to my test computer. The multiple-choice test is administered via a computer interface at an independent testing center, which makes good sense, allowing the test to be taken easily by many people throughout the world. Any test is a combination of testing an applicant’s knowledge as well as their acumen at test taking. During my career I have taught classes for certifications and have also been the creator of the content and testing process. One of the things I always stress to my students is to select the most correct answer if they are not sure. I will follow the non-disclosure agreement I accepted as part of the testing process (yes, I am one of the people who reads agreements before clicking accept) and be somewhat vague in my discussion.

As one can probably ascertain, I passed the test on the first attempt. However, I learned quite a few things that I did not know. I did not know the standard symbols used in a Gantt chart, despite having read them for over 20 years. I was not sure of the proper time to deliver a bid document package, but most of the projects I have been involved with had documented bid dates and processes. I could deduce which connector was a video connector, despite the fact that I would not be able to identify it in the field. I also realized that the test is not solely about certifying technology knowledge but includes other items deemed good practice by the committee. To me, that is where the certification started to diverge, and I saw how this testing process might not be the best evaluative tool. I also realized at that point that having the CTS certification as a prerequisite to attaining a CTS-I (Installation) or CTS-D (Design) is not appropriate.

A great installer might know nothing about the sales process; she knows that when there is a question about new additions or pricing, she should bring in the salesperson or project manager. She could be capable of determining in her head how much to derate a wire rope based on the angle of pull. She might pass the CTS-I test with flying colors on the first try but stumble during the CTS certification process. A designer might not know how to read a Gantt chart, but if the project manager keeps the team informed of the deadlines, it is not an issue. The designer might not be aware of the procedure for service calls, but that is not his skill set. As a specialist, one should not have to take the generalist test first.

My opinion about the CTS process itself is a little mixed now. I took the test without studying; I did not even open a book. I simply took a practice exam, paid my money, and took the test. I passed. That is reassuring, as I have had a career in the AV industry for over 20 years. I was also surprised by the content itself and how much, in my opinion, it has to do with the full industry. The fact that the testing center said they have about a 66% failure rate also told me that I need to reevaluate the measure of the test. I am not hiding the fact that I hold a CTS certification.

I do, however, stand by the point that InfoComm itself makes: just because one passed the certification test does not mean one is qualified. I also know that there are challenges in the continuing education, or Renewal Unit (RU), process. Many of the RU classes are simply attend-and-get-the-units; they do not prove that anything is being retained. However, that is for another blog post.

Here is my certificate, since I don’t have a digital badge yet. Listen to AVWeek, Episode 258: Throwin’ Shade for clarification about that reference.

Bradford Benn's CTS Certification from InfoComm

When traveling, I highly recommend bringing along a power strip or a power cube. Yes, we all travel with lots of electronics, so this device comes in handy, but there are other reasons as well. I use a Monster Outlets To Go Power Strip and its companion option with a USB port in it. The reason I like this device is that it is small, but it also has a power indicator in it: the rear of the male plug glows blue when there is power. In hotel rooms you would be surprised how much this simple feature helps.

The reason I like the power cube or Triple Outlet Adapter is that it helps to offset bulky transformers that would otherwise cover more than one outlet.

Another thing to consider is that the in-seat power outlets on airplanes often have a spring switch that needs to be depressed to provide power. I have been very unsuccessful with the typical USB power adapters, as they are very light and get pushed out easily. Using the power strip or cube to provide additional mass has helped to keep the switch engaged.

As I was taking pictures this weekend, I thought about how I want people to be able to use my content and thoughts. Part of this was sparked by my recent appearance on AVNation’s AVWeek Podcast Episode 189: I Know Who To Call. Tim brought up some topics that I have both experience with and opinions about, so I shared them with everyone. I was pleasantly surprised when I was also quoted heavily in an article on Commercial Integrator. So it got me to thinking: what are the rights I want to reserve or share? I am currently listening to Cory Doctorow’s book Information Doesn’t Want to Be Free: Laws for the Internet Age (hardcopy). This link is to the self-published audiobook read by Wil Wheaton. One of the things I am learning from these thoughts is the question of how much I want to share my created content.

I have already created some content (obviously you are reading some now), and I have images available that can be used. Until now I have been keeping a tight leash on the images with watermarks and right-click protection. I plan on keeping some protection in place; I am not quite sure how much yet or how it will be set up. However, I want to share the information and experiences with more people. Yes, I would like to earn some money along the way, but at the moment that is not the key goal for me. I want to create things and put them out in the open for people to enjoy. I just want to know when things I create are being used.

So having said that, you can see the description of the usage rights I have created on the usage rights page. The idea is that if you are using my content for personal use, you may do that with attribution. If you want to use my content for commercial use, there are still licensing issues to be discussed. I encourage you to consider how you want your content handled, keeping in mind that many of the tools we use are open source and are being shared as well.

I recently purchased a pair of high-performance headphones. Not high performance by brand, but by specifications. Yes, I know how to read them. They are rated for 10Hz to 30kHz and are relatively flat, with variable tuning plugs to change their response curve acoustically. I decided to give them a test run using various material in my home system, and I was surprised by the results. First, allow me to be clear that this was by no means a double-blind test. Yes, there are lots of things I could have done to improve it, but I was still surprised by the results. I am purposely leaving product and brand names out of this post, as the desire is to talk about the signal flow and process. It is very easy to get into the debate of whether Brand A or Brand B is the better product; instead, I am just talking about the signal chain and process. I am also not going to share the number thresholds I found, as each person’s needs and opinions will be different. The entire point of this is to not assume the bigger number is the better number just because it is bigger.

The basic signal flow was the following:

  • Source – 44.1kHz 16bit WAV files (1411kbps), MP3 153kbps (VBR) files, MP3 320kbps files
  • Playback Engine – iMac based, Digital Audio Workstation Software (Adobe Audition CC 2014 & Audacity)
  • Output Device – USB-connected digital-to-analog converter running at 44.1kHz, locked to the computer sample clock (D/A 24-bit, 106 dB typical, A-weighted, 20Hz – 20kHz) via headphone output
  • Headphone Output – 1/4″ TRS for stereo converted to 3.5mm TRS via passive adapter
  • Headphones – 10Hz to 30kHz passive devices in ear style with acoustic tuning plug at flat
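
To put the numbers in that signal chain in perspective, the raw PCM data rate of the source files is just the product of sample rate, bit depth, and channel count. Here is a minimal Python sketch of that arithmetic (the function name is mine, not from any library):

```python
# Raw PCM data rate: sample rate x bit depth x channel count.
def pcm_bitrate_kbps(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Return the uncompressed PCM data rate in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000

cd_audio = pcm_bitrate_kbps(44_100, 16, 2)
print(cd_audio)        # 1411.2 -- the 1411kbps of the WAV sources above
print(cd_audio / 320)  # ~4.4x reduction for a 320kbps MP3
print(cd_audio / 128)  # ~11x reduction at 128kbps
```

That 4:1 to 11:1 reduction is the data the MP3 encoder is discarding, which is what made the listening comparisons interesting to me.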

So I tried a few different sample tracks: ones that I had extracted as WAV files, the same files extracted as MP3s, original WAV files purchased directly from the artists, and MP3s encoded by the artists. They were all well-produced tracks ranging from full-band rock and roll to acoustic pieces. The artists ranged from well-established musicians (Peter Gabriel, Nine Inch Nails, and Robert Fripp) to less well-known musicians (Jonathan Coulton, Marian Call, and California Guitar Trio). These were all tracks I am familiar with. What I would do was import the files into the audio editing software as stereo tracks, both the WAV files and the MP3 files. I would place them on adjacent tracks that I could exclusively solo (screen captures at end of post). The sample rate of the project was set at 44.1kHz and 16-bit to minimize coloring by the audio software resampling. This configuration allowed me to play both the MP3 and WAV tracks simultaneously and switch between them easily. The switches were typically very fast and with few artifacts. I found that the number of bits flowing had less of an impact than I expected at higher rates. I really like that many musicians are providing uncompressed formats, but Marian Call gets a gold star for providing me WAV files. (If you listen to her stuff, the typewriters are not sound effects; they record them as part of the process.)

I was able to tell that there were differences between the compressed and uncompressed formats, no matter what the bit rates were for the MP3s. What surprised me more, however, was how subtle the differences were between each step or file in the process. I then took it a step further: I took the same WAV file I extracted from a CD, as well as a purchased WAV file, and created different MP3 streams ranging from 320kbps to 32kbps. I used a batch converter; I did not go in and tweak each encoding, as can be done with better audio editing software. I then loaded up all the files into both editing software packages and once again went through and used the exclusive or simple solo feature. I was surprised at how far down the bit rate could be set before I found the music quality objectionable. This value changed based on the material I used. Let me say that again: the minimum bit rate I found acceptable changed based on the material being used. Multiple points of diminishing returns were found. Yes, I understand what the numbers mean and how more data is typically better. But at the higher bit rates with good converters, the differences were smaller than expected. As soon as I crossed a threshold, though, it was a point of no return. The number was lower than I expected; previously I had been applying my knowledge of the encoding processes, but now I was just listening as objectively as possible. It also varied by the material, as I indicated.

If I am listening to a podcast, does it need to be 44.1kHz, 16-bit, stereo for the human voice? I don’t believe so, especially as most podcasts are just the human voice. Voice over IP studies have found that most vocal information is in the 3,500Hz-and-under range. Transmitting at the higher sample rate typically just wastes bandwidth and storage for the listener. But that is a discussion for a different time.
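
The arithmetic behind that claim is straightforward. If the useful vocal content sits below roughly 3,500Hz, telephony-style 8kHz sampling already satisfies the Nyquist criterion, and a modest mono bit rate is plenty. A rough, illustrative sketch (the 64kbps figure is a round number of my choosing, not a measurement):

```python
# Rough storage comparison for a one-hour spoken-word podcast.
SECONDS_PER_HOUR = 60 * 60

def stream_megabytes(bitrate_kbps: float, seconds: int = SECONDS_PER_HOUR) -> float:
    """Size in megabytes of an audio stream at a given bit rate."""
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000

cd_quality = stream_megabytes(44_100 * 16 * 2 / 1000)  # 44.1kHz/16-bit stereo PCM
voice_mp3 = stream_megabytes(64)                       # 64kbps mono MP3, ample for speech
print(round(cd_quality))  # ~635 MB per hour
print(round(voice_mp3))   # ~29 MB per hour
```

A twenty-fold difference for material that, to the ear, carries the same spoken content.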

I still find and believe uncompressed audio files to be the best, especially if one is tuning and adjusting an audio system. There were definitely shifts in the tonal and temporal qualities of the music. However, for listening while traveling or as background sound, perhaps the lower data rates are the proper solution. I do know that for my travel selection, encoding down to a more reasonable file size makes sense: I can place lots of music on the portable player, and I am listening in an environment, especially on an airplane, that is less than ideal. Yes, I am still keeping my music library as WAV files; yes, those are my preferred format. However, when I want to load up 8,561 songs for a ten-day business trip onto my music player or laptop, I am sated (not satiated – yes, there is a difference) with downconverting to MP3s. I will still travel with WAV files for critical listening as well, often on optical media.
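
The trade-off for that trip is easy to estimate. Assuming a hypothetical average track length of four minutes (my assumption, not a measured figure for my library), the sizes work out roughly as follows:

```python
# Back-of-the-envelope size of an 8,561-track library at different bit rates,
# assuming a hypothetical four-minute average track length.
TRACKS = 8_561
AVG_TRACK_SECONDS = 4 * 60

def library_gigabytes(bitrate_kbps: float) -> float:
    """Approximate library size in gigabytes at a given per-track bit rate."""
    return bitrate_kbps * 1000 * AVG_TRACK_SECONDS * TRACKS / 8 / 1_000_000_000

print(round(library_gigabytes(1411.2), 1))  # 44.1kHz/16-bit WAV: ~362.4 GB
print(round(library_gigabytes(192), 1))     # 192kbps MP3: ~49.3 GB
```

Roughly a seven-to-one difference, which is the gap between a library that fits on a portable player and one that does not.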

So I encourage you to try this yourself. There is open source software, such as the Audacity I used, so you can do your own tests. Like me, I believe you will find that picking quality by the number of bits flowing does not always provide a full or simple answer.

This image shows the way multiple tracks were stacked and then individually soloed in Audition
Adobe Screen Capture
This image shows the way multiple tracks were stacked and then individually soloed in Audacity
Audacity Screen Capture