I was originally going to write a blog post about the conversation topic I alluded to in a few Tweets on the evening of June 29, 2011; however, United Airlines changed the topic. This blog post is about the frustration that comes when technology does not actually make things easier, and the added frustration when asking for help after the technology fails gets you nowhere.

I wanted to book an award fare to fly myself and the L&T Wife to California on United. So I went to the United website and logged in with my frequent flier number – you know, the one that has almost half a million miles on it from the past 11 years. I went through and looked at all the options for flights before finally picking one. I signed myself and the Wife up for it, picked our seats, continued to the payment page, and entered my credit card number. I clicked the Submit button, and nothing happened. Clicked the button again, nothing happened.

I changed browsers from Firefox to Safari and tried again from the beginning, since I could not save or hold my work. Nothing happened under Safari either. I then decided to call United Rewards Reservations, which is when the frustration started. This is a basic synopsis of the conversation:

“Hello, I am having trouble booking reward travel on the website.”
“When and where are you trying to travel to?”
I respond with the information.
“No, there are no seats available for the dates you want.”
“But the website shows many open seats.”
“I am sorry, sir, the website is wrong.”
“Okay, so what are my options?”
“There is a flight three days earlier for outbound and two days later for the return.”

Whiskey Tango Foxtrot, I thought – I did not say it. I was polite to the agent, as they were just reporting what the screen was showing.

We go round and round and finally get the exact same itinerary I had created online. I did not care if it was a mileage saver fare or not; her system was defaulting to fares that take fewer miles. If I had been asked, I would have said that I had picked specific flights online.

Then came the time to make payment. Online it was 75,000 miles per person; via the phone it was 100,000 miles per person. I asked why the difference.

The agent had no good explanation, so I asked for a supervisor. During this time I was placed on hold, without music or other audio, so I had no indication I was still connected. The supervisor could not assist me.

As we passed the thirty-minute mark the supervisor indicated I should be transferred to Web Support to assist. After a few minutes with the Web Support person I was able to book my flight.

It was extremely frustrating. I tried to do it via self-service on the web. It did not work. I tried to call for help, and that did not work for the first 40 minutes. It took approximately 45 minutes on the phone and three agents to finish a transaction I already had the details for. If the first person I communicated with had listened to my original issue, they might have thought to transfer me to the web team earlier. Instead I believe they were just going off the script, not really helping the customer.

I tweeted out my frustration and decided to wait 24 hours to see if there was a response before posting. So far I have heard nothing.

Now some people may be thinking that it is only 50K miles, ~10% of my tally. To put the value of that in context, 50K miles is a round trip somewhere in the US with the right planning. Now that this trip is booked, I get to call again to add my dietary needs, as I can’t do that from the website. I think I will wait a day or two.

For those of you who have an impact on customer interaction, think about what happens when your website doesn’t work. How will you help that person? Have you provided them with enough information to know where to go for help? Is the first point of contact going to listen and respond, or just follow a script? That one decision can turn a customer interaction from a quick phone call into a frustrating waste of time for everyone involved.

Another airplane flight, another blog post. This one is about the “new modes” of audio delivery. As many of my readers know, I work in the audio industry. I do not often blog about it because I am concerned about the impact my comments could have. Not that I would get in trouble with my employer – heck, I was looking for a job when I got this one – but more that people would take my comments and opinions as if I were speaking for my employer. So let me, on my blog, my domain, in my nonworking hours, unequivocally state that these are my personal thoughts and opinions.

The new mode of delivery I am thinking of is digital distribution of audio products. I purchase music in a digital format less often than most people would think. The reason is that most delivery methods are compressed, and I believe that compression should be applied judiciously. Not all compression is bad – I write this while listening to music on my iPod on a plane. For this application, I decided that having the music with me matters more than maximum quality.

That is the key: the application. I want to travel with a large selection of music. It does not have to be pristine, as the listening environment is less than pristine. I do, however, want to have music for airplane flights and time in hotels. I do not always know what kind of music I am going to want to listen to three days from now, so I would rather have the selection, at a compression ratio I find appropriate.

I am purposefully omitting numbers, as too often when numbers are listed it becomes a contest of numbers – such as someone saying that they will only listen to music at a 96 kHz sample rate. When I ask why, the answer is often “well, it is a higher number, so it must be better.” I wonder if that person would be able to tell the difference between 48 kHz and 96 kHz recordings in the listening conditions I am currently in: a tin can traveling through the air at 300 mph with an internal ambient noise of 70 dB SPL, A-weighted, heard through noise-canceling earbuds. Probably not so easily. I am not going to say it is impossible; I am going to say it is improbable. I believe, and can hear, that there is a difference between sample rates in other environments.

At the same time, other listening environments that are acceptable applications for compressed audio for some people are not for me. In my car I have CDs loaded in the changer and a smaller selection of non-compressed audio files on the attached iPod. In that environment I can hear a difference between the full quality and the compressed audio. I do not often listen to satellite radio music channels in the car, as that compression annoys me and I can hear it. Other people do not find it objectionable.

The key is that I am deciding. I can control how much compression and how much data is acceptable and important to me. When buying audio products as digital downloads, that decision is often someone else’s, and I might not agree with it. Paying 99 cents for a compressed piece of music that is just for “fun” can make sense. Paying $15 for a digital download of a CD that is compressed into 11 separate songs, versus buying the CD itself for $15, is something I will not do.
Why, you may ask? I have done it, and I have regretted spending the money. The digital download has audio artifacts that the CD does not. With the CD, I can also decide whether I want to compress the audio to put it in another format. Not only that, I get to decide the compression protocol, as MP3 is not always the best. If more vendors had uncompressed delivery methods, I would buy more audio via digital distribution.
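To put the tradeoff in rough numbers, here is a back-of-the-envelope sketch. The lossy bitrate below is my own illustrative assumption for a typical download store, not any vendor’s actual figure:

```python
# Back-of-the-envelope comparison of CD audio vs. a typical lossy download.
# The 256 kbps figure is an illustrative assumption, not any store's spec.

def cd_bitrate_kbps(sample_rate_hz=44_100, bit_depth=16, channels=2):
    """Uncompressed PCM data rate for CD audio, in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1_000

def album_size_mb(bitrate_kbps, minutes):
    """Approximate size in megabytes for a given bitrate and running time."""
    return bitrate_kbps * 1_000 * minutes * 60 / 8 / 1_000_000

cd = cd_bitrate_kbps()  # ~1411 kbps off the disc
lossy = 256             # assumed lossy download bitrate, kbps

print(f"CD audio: {cd:.0f} kbps, ~{album_size_mb(cd, 45):.0f} MB for a 45-minute album")
print(f"Lossy download: {lossy} kbps, ~{album_size_mb(lossy, 45):.0f} MB for the same album")
print(f"The download discards roughly {1 - lossy / cd:.0%} of the data")
```

The point of the arithmetic is not the exact numbers; it is that the download has already thrown away most of the data before I ever get to decide anything.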

The key is to use the best test equipment we have – our ears – to make the decision for yourself. The way I approach it: your source should be as close to ideal as possible, and then you have the control to decide what compression tradeoffs are acceptable.

Also, please remember that one answer is not the right answer for everyone. The amount of compression that I find objectionable might be perfectly acceptable to someone else. So don’t turn your nose up and ruin other people’s enjoyment just because it doesn’t meet your standards. If people are having fun, or the message is getting across, aren’t the most important parts of audio being accomplished?

And yes, my photographer friends, the same thing can be said about JPEG compression. I start with RAW, and then I decide how to impact the image as I process it to JPEG or other formats.

An update: The Logitech G13 is no longer compatible with the latest Mac updates. The replacement I am using is the Elgato Stream Deck, as it provides cross-application features. I was also considering a Razer Tartarus V2, as it is Mac compatible.

Bradford
October 4, 2020

Oftentimes the controls for a piece of software are not in the friendliest locations for one-handed operation. By one-handed operation I mean one hand on the keyboard, one hand on the mouse. When working in graphics programs I find myself working that way quite often. It could be as basic as a drawing program where I need to use the Z key to initiate the zoom function and then use the mouse to decide where to zoom. Other times it is more complex, such as selecting an image, zooming into a one-pixel-to-one-pixel rendering, panning, and then marking the image as a keeper or a chucker. It could just as likely be a drawing program where I am documenting an idea. For my #AVTweeps, just think AutoCAD.

Recently I found myself sore at the end of an image review session from unnatural movements. My data management workflow is outlined in a previous blog post. Looking at the actual process, however, I began to find lots of moving of the hands. My review process is based around the use of Adobe® Photoshop® Lightroom® (quite the mouthful, so Lightroom for short). The program itself is very powerful and does help me manage my images, pictures, and photos. It does, however, lack some ergonomics for the one-handed user.

The way I cull images is to go into the Library module and review the images at a resolution that fits the screen. I quickly look at each one and decide if it is a Pick, Unmarked, or a Reject. These selections are made using the P, U, and X keys. Notice how they are laid out on the keyboard.

Keyboard with P, U, and X highlighted

Not very easy to navigate with one hand. Now let’s say I want to zoom into an area. One can either use the mouse to enter a 1:1 view or press Shift and Spacebar to enter the same mode, then use the mouse to zoom to areas. I do this to see how much aberration is viewable and whether the image is in focus; once again I decide if it is a Pick, Unmarked, or Reject. Lightroom has a setting to advance to the next image after assigning a value to the image.

That setting seems like it would save time, and it quite often does. However, if I want to assign two things to an image, I have to back up to that image. If I find an image of the same subject later in the batch that is better than a Pick I decided on, I go back to unmark the previously picked image. So now I have a few options. I can expose the filmstrip at the bottom of the application window, click on the image with the mouse, and then press U. If it was just the previous image, I can use the arrow keys. Notice that both of these options require me to take my right hand off the mouse and place it on the right half of the keyboard. I could also just use my left hand on the right side of the keyboard, but that still means changing positions.

Let’s say I want to see if a crop makes an image better. An example of a crop changing an image happened at a baseball game I took pictures at; since I was sitting in the stands, some of the images have the backs of people’s heads in them. Cropping the heads out made the pictures better, but some were still chuckers, not keepers. In Lightroom I enter crop mode by pressing R, which switches to the Develop module, where I use the mouse to make the crop. When I finish the crop, I want to mark the image as a keeper or chucker. I cannot do that in the Develop module; I have to be in the Library module. To return to the Library module I would either take my right hand off the mouse to do the keyboard contortions or move the mouse away from the work area. Neither solution is very ergonomic.

There are keyboards available that are designed to fix some of these issues by changing the keyboard layout and putting labels on the keys. However, some are more expensive than the program itself. They are also dedicated to the one program, so I would still need my regular keyboard for such things as entering text. Not really the solution I was looking for.

I started thinking about it more and more and came up with a more practical solution, in my not so humble opinion. I purchased a customizable gamer keypad, a Logitech G13 Programmable Gameboard with LCD Display, as it is Mac compatible – yes, it is also Windows compatible. (If you decide to buy one after reading my blog, using this link will give me a little commission.) This lets me decide how the keystrokes are used; I can lay them out to my satisfaction.

I then determined which keys I used most. They are both left- and right-handed, and some of them require both hands, such as entering the Library module (Command + Option + 1).

Commonly Used Keys on 110 Key Keyboard

These main keys were then assigned to the keypad as I found would work best for me. (Drop me a line if you would like a copy of the configuration file.)

Key Assignment for Gamer Keypad

I had 200-plus images from a business trip and figured that would be a great way to test it out. So I went through the images and did the rating, cropping, and keywording in about an hour, including uploading to a SmugMug gallery. There was an unexpected benefit as well: I was able to hide all of the tool palettes in Lightroom so the images were bigger on the screen during the review – remember, bigger is better. I do not have exact times for similar tasks using the “standard” keyboard commands, but the important thing is that I was not sore and it was not as tiring.

The keypad did the thing that I think all tools should do: get out of the way and let me work. Other than when I had to type in keywords, I used just the keypad and the mouse. I did not have to move my hands around the keyboard and mouse.

I also learned a couple more tricks in the process. I can use the keypad in more than one program but keep the key functions the same. By key function I mean that the same key that sends an R to enter Crop mode in Lightroom can be configured to send a K in Photoshop or Command + K in Preview to perform the crop functions. The same key press to me sends different keystrokes to the application. Much easier than having to remember all the different commands – similar to Cut, Copy, and Paste being the same in almost every program. That is a fine example of what I was trying to accomplish; cut (Command + X), copy (Command + C), and paste (Command + V) are not great mnemonic devices at first blush, but the arrangement of the keys makes them very easy to use.
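The per-application mapping idea can be sketched as a simple lookup table. The key names and the Photoshop shortcut below are my own illustrations, not the G13 software’s actual configuration format:

```python
# Sketch of one physical keypad key sending different keystrokes per app.
# "G1"/"G2"/"G3" and the Photoshop entry are hypothetical examples,
# not an actual G13 profile.
KEY_PROFILES = {
    "Lightroom": {"G1": "R",        # enter Crop mode
                  "G2": "P",        # flag as Pick
                  "G3": "X"},       # flag as Reject
    "Photoshop": {"G1": "K"},       # hypothetical crop shortcut
    "Preview":   {"G1": "Cmd+K"},   # crop in Preview
}

def keystroke_for(app, physical_key):
    """Look up which keystroke a physical keypad key sends in a given app."""
    return KEY_PROFILES.get(app, {}).get(physical_key)

# The same physical key, three different keystrokes:
for app in ("Lightroom", "Photoshop", "Preview"):
    print(app, "->", keystroke_for(app, "G1"))
```

The point is that my muscle memory stays constant while the profile, not my hands, absorbs the differences between applications.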

As things become more and more automated, I feel that the understanding of the process is being lost. I believe that tools should make my life easier and allow me to spend my time doing other things. However, there is a downside: does one always understand what the automation is accomplishing? While these tools can be great timesavers, what happens when one doesn’t work or you don’t like the results? Understanding the process that the automation is simplifying is key.

A common example is configuring an IP network. Most people simply connect to a network and let a Dynamic Host Configuration Protocol (DHCP) server assign the address. This happens at the office, the home, the coffee shop – pretty much everywhere. When it doesn’t work, for whatever reason, where to start troubleshooting is a mystery to some. I use DHCP quite a bit; I also know how to do the entire process manually. I can manually – not that I want to – calculate the subnet and assign the addresses. When there is no DHCP, I am still able to get connected. And if I am still unable to get connected, I am able to call tech support and describe the problem effectively.
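For the curious, the subnet arithmetic I described doing by hand can also be done with Python’s standard ipaddress module. The address and prefix here are made-up examples for illustration:

```python
# Working out a subnet the way I described doing it by hand -- here via
# Python's standard library. The address and prefix are made-up examples.
import ipaddress

iface = ipaddress.ip_interface("192.168.1.42/26")
net = iface.network

print("Network:  ", net.network_address)       # 192.168.1.0
print("Netmask:  ", net.netmask)               # 255.255.255.192
print("Broadcast:", net.broadcast_address)     # 192.168.1.63
print("Usable hosts:", net.num_addresses - 2)  # 62

# When there is no DHCP, pick an unused host address from this range:
first, *_, last = net.hosts()
print("Assignable range:", first, "-", last)   # 192.168.1.1 - 192.168.1.62
```

Knowing that this is all the DHCP server is doing on your behalf is exactly the kind of process understanding I mean.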

While IP networking is a common example, this occurs with other technologies as well. I have an interest in photography and have been doing more processing on images. Some of the process I do manually; for other parts I use automation tools. An example is this picture of Martin Brodeur I took.

Straight out of camera, no processing

I took the shot in shutter priority mode, and I told the camera where to focus to get Brodeur sharp and the background blurry. I could have accomplished a very similar effect using the Portrait Mode preset in the camera, but I wanted to control the look of the picture. After I took the picture I did some work on it in Lightroom and Nik Software. In the process I adjusted for the lens, applied a vignette, applied noise reduction, and converted it to black and white. This process was a mix of manual and automated steps. I could have just clicked a few buttons and called it done. Instead I made decisions along the way, and I understood the impact of those decisions. As a result, I was able to decide the final mood of the image.

Processed picture, click to see entire gallery

This result is much better because I controlled the process and got the result I wanted. Did using automation for part of it save time? Yes, it did. Since I had taken the time to learn about the conversion process (http://www.dgrin.com/showthread.php?t=114917), I was able to understand the questions and obtain the result I wanted. Now if you will excuse me, I need to troubleshoot my network, as the Wii is not connecting to the Internet.

Recently I ran across a story, http://thestolenscream.com/, about a picture that was taken from a photographer’s Flickr site and used around the world. He was not compensated. It is both an amazing story of how something can travel the world just by being good, and of how people’s work is at times stolen. The video is 10 minutes long and is well done. The back story and video link are available at http://fstoppers.com/fstoppers-original-the-stolen-scream/

Notice what I have done above: I clearly indicated where the information is located. I could just as easily have gone to YouTube and gotten an embed link to put into my blog. I could also have downloaded the video and edited out the credits. But that would be an insult to the people who created it. I would basically be stealing their time and effort.

I know that some of my readers are more familiar with audio-video system integration than with photography. The same thing occurs there and in other places as well. It might not be a picture; it could be a grounding scheme or a user interface panel, just for a sample. Perhaps it is finding information on a manufacturer’s website and including it in your information package. Often manufacturers are okay with that, if you are using the information to sell and use their products. However, that does not always happen.

Last year I was very surprised when someone called me to complain about a training video I did that was on YouTube. I was not surprised that I got a complaint; rather, I was surprised that it was on YouTube. I did not upload the video there. I uploaded it to my work website. Not a huge deal, as it was information about our products, but then it started to sink in. This website had taken someone else’s work, made some edits, and was presenting it as their own. They even placed their company logo over the video.

Someone else was duplicating all of the time and effort placed into the video. I understand that anything on the Internet is capable of being copied. What annoyed me most was that the effort put forth to collect and present the information was not being recognized; someone else was just taking it.

That seems small – no one harmed, right? That is somewhat correct. My company paid for me to make the video, and the product was still being promoted. But what happens when it is not a sales tool but rather a picture of a landmark, a presentation about a topic, a system design, or a configuration file for a piece of equipment?

The information is being provided without compensation to the creator, or even acknowledgment. Basically, that person’s time, effort, and knowledge are being stolen. If it is licensed under Creative Commons terms, the creator expects certain respect in the process. If it is not expressly stated that it is okay to use, it should not be used.

The best example is someone who is creating a presentation or proposal and needs a picture of a movie theater. I found a nice theater image on Wikipedia taken by Fernando de Sousa from Melbourne, Australia, and licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license. That license requires attribution. Mr. de Sousa is a professional photographer. He takes pictures for compensation. He shared his work – the results of his skill, equipment, experience, and knowledge. All he asks for is credit. Will you provide it?

Think about it another way. You went through the process of creating a proposal for a project. You outlined the equipment and process you are going to use. You provided information about why you chose that approach. The person you made the proposal to decides not to hire you. Instead they take your proposal package and use it to create the project themselves. Would that annoy you? Would you expect compensation? How about if all you asked for was attribution?

So I ask everyone to please respect the intellectual property, time, effort, and knowledge that is provided on the Internet, and at least provide attribution. Don’t take credit for other people’s work.

I am off to go place watermarks on my stuff, if you would like to use an image without it, just ask.

Also known as “The Disconnected Challenge” or “The Offline Challenge.” It has become more of an issue since everything has gone to the “Cloud.” What happens when one cannot connect is something to be considered.

Bradford
October 4, 2020

Another blog post written at 32,000 feet, as that is when the issue hit me. I have various electronic devices, as my dedicated reader knows, and I have previously talked about various data access connection challenges. This new challenge is not one of my own doing. It is a poor user experience, or a poor use case definition. The problem is illustrated here by Amazon and their Kindle applications, but it does not apply to just them. This challenge affects many applications beyond this example.

I have found a case where the advantages of electronic delivery of a book outstrip the disadvantages I previously outlined. This happened with a “for Dummies” book. At work, I am on a software implementation team rolling out a new application package. I wanted the “for Dummies” book for the application. I looked at Amazon, and the book was available both in paperback and in Kindle form. The Kindle form was substantially less expensive, but the key item was that I could get literally instant delivery. While on a conference call I was able to purchase the book, take delivery of it, and reference it during the call. It was very powerful and better than using Internet search tools, as it has a high signal-to-noise ratio and no rabbit trails.

The next day I had a business trip, so I had my analog reading material and my electronic versions. On the plane flight I started to truly read my newly purchased book. It was also the first time I had started to explore some of the Kindle application features. I saw that there were sections of the book that were underlined – not solid underlines, but dashed ones. I was not sure what it was at first, but I found out that it meant other readers had highlighted those passages. The idea of crowdsourced highlighting was intriguing to me; it helps to know which areas one should pay attention to.

I wanted to see what other features were available. My brain needed a little break from thinking about business practices, so I was going to use that time to browse through the help file and see what other features of the Kindle application I might not be using. I was airborne when I wanted to do that, and I had no Internet access on that flight. As a result of not being connected to the Internet, the help file was not available.

That seems very counterintuitive – why would an electronic reading application not include a help file with it? Think about that for a moment. Something that is designed to read documents while disconnected from the Internet is not able to read its own help file while not connected. It is not just Kindle that has this design flaw. Cloudreader, Nook, and iBooks for iPad do not have help files that are readily available offline. I am sure that I could continue to list others as well. It also occurs with applications for workstations.

Not all applications are that shortsighted. Two applications on my iPad have help that is available offline: iAnnotate and DocsToGo install their help files as documents you can read from within the applications.

That makes perfect sense to me. An application that is designed to be portable should have supporting documentation that is portable. So for those of you involved in the design and creation of applications, think about the user who is not connected to the Internet. They might want to refer to the supporting documents; you should make it easy for them. The fact that I turned to the help file already means the application is not intuitive enough. Do not compound the issue by making it difficult to find the help.

This concept also applies to those of you who are creating custom control interfaces using software created by others. On more occasions than I care to count, I have ended up troubleshooting a control system and having to guess. These guesses range from what IP addresses to use to connect to the system, to what the control system is using for the backend, to how to get help.

For the application users, I recommend that you try out your applications before you are traveling with them or disconnected from the Internet, to make sure you understand how to use them. The help files might not always be available.

Well the fasten seatbelt sign just came on….

<note this post was recreated after a website crash, good thing I backed it up>

Since writing this post in 2010, I have moved away from JungleDisk. I found that it was using up too many clock cycles in the background. I am now using AWS and ChronoSync.
Bradford
October 4, 2020

I have found a few things out over the past few weeks that I figure I will share with you, my faithful reader. I had a logic board failure on my MacBook Pro, which meant that I was sans laptop for approximately 10 days. Less than 12 hours after I got it back, the cable modem at my house failed.

So between not having my personal laptop and then Internet access being a car ride away, I discovered some items along the way.

  • Backing up Data is important, but one also needs access to the data

There are a few other tangential things I have found out as well – changes to my photography workflow, that online instructions should not be the only instructions, and that unfettered Internet access can be a key item – but those will be separate posts.

With my backup solutions, none of my data was in jeopardy; using that data, however, was the challenge. I have been using JungleDisk as my incremental off-site backup solution. It works very well for me, but it came with some choices whose consequences I was not fully aware of when I made them. Using a block copy approach, I could reduce the amount of bandwidth and storage space I use; however, this does not come without tradeoffs. By making this choice I am unable to browse the files online – I have to actually restore them using the client software. At the time I did not think that was a big deal, as I figured I could always just install the client on another computer and get all the data back.

A key item here is that it is my off-site backup. Too many people think that just having a backup is sufficient. It is not, as there are other things to consider than just a hard drive or computer failure. One has to think of other ways that data can be destroyed: “Someone stole my car! There was an earthquake! A terrible flood! Locusts!!” Having the data off site makes it much less likely that data will be lost.

I could have just installed the client on another computer and gotten all the data back, but that still was not going to solve all my issues. As a result of not being able to browse the contents, I am going to change my approach yet again.

Some items will be backed up using block copy, other items will be backed up using file copy, and still other items will be backed up to either MobileMe’s iDisk or to my Dropbox account. You might wonder what data goes where and how to keep it all organized; that is actually fairly easy as long as I make the right decisions when starting. Just by putting files into different locations on my computer, they will be backed up in different ways. Items placed in the Documents directory will go to JungleDisk, items in the Dropbox folder will be on Dropbox obviously (still waiting for selective sync before I am 100% happy with it), and items stored in iDisk will be on MobileMe’s iDisk.

The key to this approach is to make sure that each live file is stored in one location and only one location. I have often encountered problems where two files have the same name but different time stamps, or live on different computers, and then how do I know which one is current? Since all of these items are backed up to the “cloud,” I do not have to worry greatly about the loss of data. I still do backups to DVD and secondary hard drives every so often so that I am not completely at risk. For items that I want to back up in more than one location – I have not hit any yet – my plan is to use ChronoSync to keep a “Backup” directory in sync. This will allow me to create a directory in one of the other storage locations labeled KeyJDBU (Key JungleDisk Backup items); then I can use ChronoSync to decide what to copy into it and keep in sync.
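The keep-in-sync idea is simple enough to sketch. This is my own illustration of a one-way sync into a KeyJDBU-style directory, not how ChronoSync actually works:

```python
# One-way sync sketch: copy files into a backup directory when the source
# copy is newer or missing. Illustrates the idea, not ChronoSync's behavior.
import shutil
from pathlib import Path

def sync_one_way(source: Path, backup: Path) -> list[str]:
    """Copy new or updated files from source into backup; return what changed."""
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dest = backup / src.relative_to(source)
        # Copy when the backup copy is missing or older than the source.
        if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # copy2 preserves the timestamp
            copied.append(str(src.relative_to(source)))
    return copied
```

Because the copy preserves timestamps, running the sync a second time with no changes copies nothing, which is the behavior I want from the real tool.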

This approach of also having the key items in iDisk or Dropbox will allow those items to be browsable without having to restore all the data. It still does not solve another key issue: do I have access to the programs needed to use the data once restored? I found that quite often the answer was no. Most of this situation was my own fault, as I chose what format to store the data in. Once again I could reinstall and have the data back, but that would take a while, especially with the licensing headaches some companies have put in place (that means you, Adobe). I am now considering how to handle that issue.

So I use SmugMug to host my photos, as they have some really cool features and people there. I also started following a few of them on Twitter, and there was a tweet that just made my head hurt, so I sat down to do the math on it. Okay, I also used Wolfram Alpha to help with it.

The Tweet from Baldy stated:

“Whoa! Vincent LaForet‘s new Canon Mark IV vid on SmugMug used over 20 terabytes of bandwidth in 300,000 views in 14 hours.”

So I started to figure out how many megabits per second that was, so I could compare it to the network connectivity I am more familiar with: 100BaseT (Fast Ethernet) and Gigabit Ethernet. Well, it just became amazing.

  • First I converted 20 TB to megabits
    • 20 TB = 20,000,000,000,000 bytes = 160,000,000,000,000 bits = 160,000,000 megabits.
      Yes, that is 160 million megabits
  • The next thing was to convert hours to seconds
    • 14 hours = 840 minutes = 50,400 seconds
  • Now to convert to megabits/second
    • 160,000,000 megabits / 50,400 seconds ≈ 3,175 megabits/second ≈ 3.2 gigabits/second.
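The conversion is easy to double-check in a few lines, using decimal units (1 TB = 10^12 bytes) as above:

```python
# Double-checking the bandwidth math, using decimal units (1 TB = 10**12 bytes).
terabytes = 20
hours = 14

bits = terabytes * 10**12 * 8  # 160,000,000,000,000 bits
megabits = bits / 10**6        # 160,000,000 megabits
seconds = hours * 3600         # 50,400 seconds

rate_mbps = megabits / seconds
print(f"{rate_mbps:,.0f} megabits/second")        # ~3,175 Mb/s
print(f"{rate_mbps / 1000:.1f} gigabits/second")  # ~3.2 Gb/s
```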

So that is pretty freaking fast for how quickly the data is coming out.

Wolfram Alpha had cool comparisons to put it in context. It is approximately equal to the text content of the Library of Congress. It is approximately 1/8th of the estimated data content of the surface web (~170 TB).

Dang, no wonder they need 2 TB of flash memory for a server. You can see the picture, and Don MacAskill, CEO of SmugMug, here: http://bit.ly/3HlXzH

A few days ago I posted a Tweet that said, “signal to noise is important, not just in audio but in life.” That post was an amalgam of someone’s tweet commenting on the palaver at their job and of the number of Tweets I was getting from one stream. I realize that the single stream is not an indictment of all who Twitter. Twitterers?

I figured I would post here what I learned from a quick study over the past week. I am following 40 streams; 32 posted something in the past week, with a total of 522 tweets, or an average of 16 tweets per stream for the week. However, there was one person who posted 208 Tweets in one week, the vast majority of which were very repetitive and redundant. Since a picture is worth a thousand words, how much is a graph worth?

40% From one stream
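The numbers behind that graph work out like this (the counts are from my own quick tally, rounded):

```python
# The week's numbers behind the chart, from my own quick tally.
total_tweets = 522
streams_posting = 32
one_stream = 208

print(f"Average per posting stream: {total_tweets / streams_posting:.0f}")  # ~16
print(f"Share from the one noisy stream: {one_stream / total_tweets:.0%}")  # ~40%
```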

In addition, the person put identifiers in their tweets so that they would trend, and much of the content was being pushed through AlertDeck. So I am no longer following that person. The disappointing part is that they actually have something valuable to say; they have just started adding too much noise in trying to market themselves.

My warning is that marketing via Twitter can be done, but if there is no content everything gets turned off. Stay tuned… I might decide to reveal who the offender is.

Oh yeah, I have also decided that Apple’s iWork Numbers ’08 is not very powerful when it comes to collating data, as I still had to do much of it manually instead of just using a Pivot Table in Excel. I also still can’t activate half my applications…