Latency is the delay between when you make a sound and when the app actually records it. Android has particularly poor latency (as high as hundreds of milliseconds, which is crazy), so here’s a way to adjust the app settings to allow for it:
1. Record some hand claps or other percussive sounds, then press the bounce button.
2. With no headphones plugged in, and with the volume turned up reasonably high, press the record button again.
3. If there is a latency problem (which is almost always the case), your new recording will sit slightly before or after the original.
4. Adjust the latency setting accordingly in the settings dialog.
5. Repeat the recording (step 2).
6. After a bit of trial and error you’ll find the new recording sitting much closer to the original, and that’s your best setting.
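For what it’s worth, the correction that setting applies is just a fixed sample offset. Here’s a minimal Python sketch of the idea (the function name, buffer layout and default sample rate are my own assumptions, not Twotrack’s actual code):

```python
def nudge(samples, latency_ms, sample_rate=44100):
    """Shift a recorded take earlier by latency_ms so it lines up
    with the track it was overdubbed against."""
    offset = int(sample_rate * latency_ms / 1000)
    # Drop the samples that arrived late, then pad the end with
    # silence so the take keeps its original length.
    return samples[offset:] + [0] * min(offset, len(samples))

# e.g. at a 1 kHz sample rate, 2 ms of latency drops 2 samples:
# nudge([1, 2, 3, 4, 5], 2, sample_rate=1000) -> [3, 4, 5, 0, 0]
```

Trial and error with the claps is really just searching for the `latency_ms` value that makes the two recordings coincide.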
I’m with my family in Papua New Guinea at the moment, carrying a couple of Android phones and a tablet with us. There seems to be a growing number of Android phones here, which gels with my experience of the number of countries represented in my app download stats (about 90). While I’m obviously most interested in how my music apps might be used in this context, there are other issues to be addressed for interface and design usability more generally.
Patchy, low-bandwidth data connections
In the village, although there are places with data reception, it’s very low bandwidth and tends to drop out from time to time. Apps that deal with that well are really appreciated. Gmail is one, as it has outstanding offline capabilities. Sharing over Bluetooth is very common here, and is something I hardly ever do at home, so with music apps, sharing via intents in a common format is essential. Yesterday we made a really important recording of a family member (born in 1919) talking about land and traditional songs. My app Twotrack records in WAV format, and my brother-in-law tried to send a copy of the recording to an uncle via Bluetooth on his phone. The phone didn’t play WAV files, hence the obvious need to be able to convert to MP3, but that’s the subject of a forthcoming post…
Harsh light conditions
I’ve used the phone a fair bit on the water on the way to and from fishing spots, so apps whose controls remain highly visible through a protective pouch are best. Button size and colour make a huge difference. The record button in the “My Tracks” app, for example, was easy to find this morning on the boat when I couldn’t really read anything else in bright sunlight.
Tight data and power budgets
When you might struggle to keep your phone account charged with K5 (about USD$2.50), data usage is really tight. 5MB can be a lot in that context, so keeping app sizes down makes a lot of sense. Finally, the Asus Transformer tablet we are carrying with us is by far the best computing device I’ve ever brought to the village. Previously I’ve brought a late-1990s Toshiba laptop, an old PowerPC PowerBook and a white MacBook here, with various charging options (solar, petrol generator, car charger). The tablet, with the extra battery in its keyboard dock, not only lasts for days of use (maybe 12 hours or more) but is good for kids to watch movies, take photos, and connectivity in general (it has both a micro and a standard SD slot). It doesn’t take that much to recharge either, although you’d need a decent solar charger, not the tiny 2000mAh thing I’m using for the phone.
It uses libpd, an open source library that allows developers to incorporate patches from Pure Data into their apps (it’s used by my app Beats and loops, for example). Check out this Pd patch screenshot from the app:
An Android port is coming soon apparently; I can’t wait! This kind of collaboration between a record label and the arts studio Seeper is really exciting. I’m really interested in the space for simpler apps with a wider potential user base/audience (the two are merging/the same?!).
Most people use their thumbs. I read a great post this morning that points out some really significant usability things to consider. Designing apps that are fun and effective to use is really important for music production, so seeing where on the screen it’s easiest for people to reach is really valuable. Check out this diagram from that post about ease of screen access when using thumbs:
I’ve been watching users more and more on my commute and walk to work, and that seems to be the case. I’m also reading User-Centered Design by Travis Lowdermilk – an O’Reilly book – which is a really useful reminder to involve users more and more throughout the design process. The plan is to incorporate these ideas into future updates and apps.
After some modifications to my homemade iRig (aRig?) that added the headphones as well, I decided to try recording directly from the Meeblip to the phone, then overdubbing some guitar, as a proof of concept more than anything. I sequenced a bass line on the laptop, then played this back through the Meeblip, with its audio out going to the phone:
Then I bounced this down to the stereo track and overdubbed some guitar. Sorting out the latency took lots of mucking around with the settings in Twotrack, so I definitely need to add a nudge “feature” to it:
And then bounced it and shared it to Box.net. I added the share facility on the weekend, but need to tweak it as only Box.net seems to work at the moment with the way I’m coding it. Getting there though…
Rough results (done quickly!) posted to SoundCloud.
Love the look of the iRig, but decided to try the homemade option. A quick trip to Jaycar and, for less than $10, voilà: version 1. I need to add the headphone part obviously, but that can wait. Audio is getting in to my Android Two Track program successfully. It’s probably easier to buy an adaptor from kVconnection; there’s useful info there on which contacts are which – 3 is ground and 4 is record on my phone (HTC One X).
The only real reason I keep a Mac OS X box running at all is so I can use one of the Metric Halo 2882s our department has. The flexibility of routing, the quality of the pre-amps, and the sound of the various pre-amp and desk modelling add a polish to a mix that I can’t get any other way at the moment. Yesterday I set up a useful way of routing stems from Ardour to the MIO and back, and was really pleased with the results. I set up a series of stems (drums, brass and rhythm) on separate buses, then added an insert in each, routing through the MIO via the RME optical outs (appearing in the MIO as ADAT ins). The result has to be recorded back into a separate track (you obviously can’t render a WAV file for export when routing through an outboard processor), but it’s worth it.
The flexibility of JACK and the Ardour system of inserts makes the audio routing in OS X (even if Soundflower or ReWire are used) look positively ancient (JACK is available for OS X too, I think). On another note, my new 64 Studio box has been running for a week or so, working with audio almost daily, and it has yet to register an xrun.
Charles Arthur in a Guardian blog provides some useful perspective on the ludicrous figures often quoted by the music industry in relation to file sharing and lost income. Also led me to a link to the Guardian datastore which looks really useful.
A couple of months back I decided to upgrade my main desktop and audio machine to the latest version of Ubuntu Studio (9.04). Installation from the DVD was a breeze, and it rebooted smoothly into the new look. Basic desktop stuff was great, but setting up my audio ran into the first of two main problems. I have two soundcards: the onboard Intel HDA, and an RME card. I like the Intel to be picked up as hw:0, and take its optical out into the optical in of the RME card, which I have running JACK. That way I can play things from Firefox or other standard apps that only look for the first card, and route them through JACK. I couldn’t get this sorted at all, and concluded that PulseAudio was part of the problem. I tried various suggestions gleaned from the net to no avail. Solution: take an analog out and feed it to the MIO 2882 that I use for recording under OS X (I also use it to mix – amazing box with insane onboard DSP).
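For the record, the usual way to pin ALSA card ordering is a modprobe options file rather than fighting with PulseAudio; something like the sketch below (the filename is arbitrary, and snd-hdsp is only a guess at the RME card’s driver module – it varies with the model):

```
# /etc/modprobe.d/alsa-cards.conf (filename is arbitrary)
# Keep the onboard Intel HDA as card 0 (hw:0) and the RME as card 1
options snd-hda-intel index=0
options snd-hdsp index=1
```

After a reboot (or reloading the modules), hw:0 should reliably be the Intel HDA regardless of probe order.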
I managed with this setup for a while, until I came to use Ardour to mark some drum stem mixes from class. Good old Alt-M to bring up the mixer didn’t work (well, I had to type it twice). The killer for me, though, was that the transport shortcuts no longer worked when the mixer was in the foreground. This is an absolute showstopper – anyone using a DAW hits the space bar a lot, and it needs to work in whatever context (setting levels, in my case) one is working in. I found others with the same problem but no solution, so I thought I’d try 64 Studio instead – surely no regular Ardour users could stand the transport drama?! I was also getting regular xruns, but I was running things like Google Desktop and GNOME Do, so this was my fault really! I didn’t care too much as I don’t record on this machine. Time to separate audio from office things anyway, and another box (identical specs) became available.
I tried the stable 2.1 ISO, but it kept failing with a stupid config error regarding tetex-bin. Perhaps my DVD had errors, but the installer couldn’t move on, so I downloaded the beta 3.0 ISO and installed. Very smooth installation, and JACK setup was instant. Optical out from the HDA card is all good… Alt-M in Ardour works and my transport shortcuts work again – yay! No xruns so far either, so I’ll be sticking with this solution for a while.
Lesson – once an audio machine is working and stable, I will avoid upgrading until I know for certain that the things I use regularly work as expected. No more mixing everyday office and network stuff with serious audio apps. I think Ubuntu Studio is fantastic, and highly recommend it to anyone interested in audio work, but the latest version needs a bit of tweaking. I feel a bit guilty that I’m not helping the Ubuntu Studio community in a real way (although this sort of testing is useful, I guess), but I do a lot of other open source work, so horses for courses.
Here’s the setup – I use the MacBook as a glorified mixer to control the MIO, and use Synergy to share the same keyboard and mouse between machines.
It’s back to the MusicKit after a break of a few years, and getting it to compile against developments that have taken place in GNUstep requires a bit of work. These notes refer to the trunk of MusicKit as of March 27, 2009.
Errors about common.make mean this file needs to be sourced to set the environment up correctly: . /usr/share/GNUstep/Makefiles/GNUstep.sh
portmidi.h: no such file or directory
so we need the development libraries for portmidi. On Ubuntu:
sudo aptitude install libportmidi-dev
Then we come up against the following warning, which could well be causing bigger problems:
GNUSTEP_INSTALLATION_DIR is deprecated. Please use GNUSTEP_INSTALLATION_DOMAIN instead
So in the GNUmakefile in trunk/MusicKit/Frameworks/PlatformDependent/MKPerformSndMIDI_portaudio I changed: GNUSTEP_INSTALLATION_DIR = $(GNUSTEP_LOCAL_ROOT) to: GNUSTEP_INSTALLATION_DOMAIN = LOCAL
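Putting that change in context, the relevant fragment of that GNUmakefile now reads something like this (surrounding lines omitted):

```
# trunk/MusicKit/Frameworks/PlatformDependent/MKPerformSndMIDI_portaudio/GNUmakefile
# was: GNUSTEP_INSTALLATION_DIR = $(GNUSTEP_LOCAL_ROOT)
GNUSTEP_INSTALLATION_DOMAIN = LOCAL
```

The DOMAIN variable takes a domain name (LOCAL, SYSTEM, USER) rather than a path, which is why the $(GNUSTEP_LOCAL_ROOT) value had to go.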
I tried SYSTEM in place of LOCAL too, but am still getting problems with the header files of MKPerformSndMIDI not being recognised when the SndKit starts to build. Anyway, that’s where I am for the moment and I’ll try more later…