tl;dr: You want $TERM to be screen-256color when tmux is running, and you want it to be xterm-256color when tmux is not running. Also, launch tmux with the -2 argument.
I love tmux. It is the primary reason why I switched from using gVim to console vim. I love having a fully terminal-based workflow. It beats switching between a GUI editor app and a terminal window any day.
This switch, however, was not without some issues. Here are the solutions to two that I encountered.
Weirdness with zsh, tmux, and vim
When $TERM is screen-256color but tmux is not running, zsh will echo your command back into the output when you hit Enter:
Notice how the output of the “ls” and “echo” commands repeat themselves in the output stream as soon as I switched to screen-256color.
When $TERM is xterm-256color while tmux is running, colors will not display properly in Vim:
vim /etc/default/grub while TERM=screen-256color:
vim /etc/default/grub while TERM=xterm-256color:
In my zsh config (~/.zshrc), I set xterm-256color to be the default TERM, but right after that, added a command that would re-export TERM as screen-256color if tmux is running:
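The relevant snippet from my ~/.zshrc looks something like this (a minimal sketch of the approach described above — tmux sets the $TMUX environment variable in the shells it spawns, so its presence is the test for "tmux is running"):

```shell
# ~/.zshrc
# Default to xterm-256color...
export TERM=xterm-256color
# ...but if this shell is running inside tmux ($TMUX is set),
# re-export TERM as screen-256color.
if [ -n "$TMUX" ]; then
  export TERM=screen-256color
fi
```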
No Vim colorschemes when tmux is launched by terminal app in place of shell
I ran into a specific set of circumstances where my Vim colorscheme would not display.
Terminal applications usually launch a shell by default, but some (like gnome-terminal) have the option of defining a command to be run instead of the shell.
If I set this command to tmux, tmux would indeed launch. However, if I then ran Vim, the colorscheme would not display correctly.
However, if I allowed gnome-terminal to launch a shell, and then ran tmux myself from that shell, Vim would display properly within that tmux session.
I got my clue from this StackOverflow post. Basically, what is happening is that, when running tmux from within my shell, which is configured for 256 colors, tmux would launch in 256 color mode. But when I had gnome-terminal launch tmux directly, it would not.
The easy way around this was to use the “-2” argument for tmux, making the command tmux -2.
With that command in place, tmux launches whenever firing up gnome-terminal, and does so in 256 color mode.
I tracked it down to this issue, which pointed out that the error was related to the hyphenated name ending with a Ruby reserved word.
By convention, Rails uses underscores for word separation in file names. Hyphens are not completely disallowed (the above works if I rename the file to end with a non-reserved word), but they can lead to issues.
The issue linked above contains a pull request for a better error message, which was merged into Rails 4. But for those still on Rails 3 and earlier, if you see this vague error message, now you know why.
tl;dr: Things not behaving right in tmux on OS X? Install reattach-to-user-namespace
Are you a tmux + Mac OS X user? Have you had any of the following problems?
Running launchctl to start services fails with a message like launch_msg(): Socket is not connected
Using the OS X Pasteboard commands pbcopy and pbpaste and having them not work
Launching a GUI app from the terminal and getting a “ghost window”: the app window loads in the background, with no dock icon, cannot be Command-Tab’d to, and the app’s menu does not populate the top bar when the window gains focus
Why does this happen? Chris Johnsen has some details…
tmux uses the daemon(3) library function when starting its server process. In Mac OS X 10.5, Apple changed daemon(3) to move the resulting process from its original bootstrap namespace to the root bootstrap namespace. This means that the tmux server, and its children, will automatically and uncontrollably lose access to what would have been their original bootstrap namespace (i.e. the one that has access to the pasteboard service).
It turns out that Apple has patched the version of GNU screen that they distribute with OS X to avoid this problem. But this is 2013, and we UNIX geeks have moved on to tmux, right? Chris goes on in that README to explain why porting Apple’s screen patch to tmux would be tricky.
So, instead, he provides the reattach-to-user-namespace wrapper program. This allows us to launch a process and have that process be attached to the per-user bootstrap namespace, which, to put it simply, makes the program behave as we are expecting.
The “trick” is to configure tmux to launch its shells with the reattach-to-user-namespace wrapper. By doing that, the shells tmux launches, as well as everything those shells launch, will be attached to the user namespace, and the problems listed at the top of this post will no longer be issues. We can use the default-command option in ~/.tmux.conf to wrap our shell launching command.
First, we need to install reattach-to-user-namespace. If you use Homebrew or MacPorts, this is as easy as:
# with Homebrew
$ brew install reattach-to-user-namespace
# with MacPorts
$ port install tmux-pasteboard
I use the same dotfiles for Linux as well as OS X, so I only want to do this in the OS X environment. I accomplish this with the following:
" at the end of the fileif-shell'test "$(uname)" = "Darwin"''source ~/.tmux-osx.conf'
If you only use OS X, you can skip creating an external file and just put the set-option line directly in your ~/.tmux.conf. Also, I am using zsh, so the command I pass to reattach-to-user-namespace is zsh. If you’re using a different shell, change that to your shell’s name.
With this configuration in place, kill and re-launch tmux. The shells that tmux launches should now get attached to the user namespace, and namespace-related issues should be resolved.
My mother is an accountant. Walk by her office on any given day, and you’ll likely hear the mechanical sounds of an accounting calculator printing its results to a stream of paper.
I used to make fun of the endless crunch-crunch-crunch sound that echoed down the hallways. These days, she tells me, the ol’ hand calculator doesn’t get quite as much use as before. More and more of the accounting business is computerized.
It comes as little surprise. Computers were invented to crunch numbers. When computers became machines that fit on a desktop, the “killer apps” were all about numbers: the first two applications named in Wikipedia’s entry for killer applications are Visicalc and Lotus 1-2-3.
Accordingly, it did not take long for personal computer manufacturers to take inspiration from those hand calculators and add the number pad to the right of the typewriter key layout.
Personal computers, however, have moved well beyond the domain of the office desktop. Indeed, for most people, the computer is no longer thought of as a device for performing calculations. They are used for communication, and for accessing and storing data. I don’t have data to back it up, but I would wager that most computer users don’t punch in long sequences of numbers regularly.
And yet, while the computer has evolved, the number pad remains. Like the wings of a flightless bird, the vestigial number pad sits unused, eating up space on millions of desktops.
Oh sure, you use the number pad, you say. And perhaps you do. But do you really use it enough to dedicate 6 inches of desk width for it? More to the point, does every computer user? People are buying laptops and netbooks for their computing devices more and more, and I don’t ever hear people complaining about how much they miss the numpad.
And yet, the vast majority of keyboards for sale include the numpad. Finding keyboards without them takes some effort.
One of the few I became aware of when starting the search was the Happy Hacking Keyboard Lite.
It’s a nice, small deck. It uses a “UNIX” keyboard layout, like the ones on the old Sun boxes in one of the computer labs back at university.
Apple has come around on the idea of ditching the numpad. New iMacs come with a wireless keyboard that has no numpad.
I considered picking up one of these. And I actually did pick up a couple of Apple’s discontinued wired USB tenkeyless keyboards.
They’re not bad as spare keyboards to have around, but they were not going to be my primary keyboard. (My wife is using one on her desktop machine, though).
One keyboard I really want is the 84-key “Space Saving” version of the IBM Model M.
Sadly, they are awfully hard to come by. I watch for them on clickykeyboards.com but it’s just an endless list of SOLD boards.
But the keyboard that ended my search was the Leopold Tenkeyless Tactile Touch from EliteKeyboards. It combined my desire for a compact no-numpad keyboard with the desire to have a mechanical keyboard.
It’s been a couple of years now since I bought this keyboard, and while the idea of spending $100 on a keyboard was a tough pill to swallow at the time, I would not hesitate to do it again. The compact size makes life nicer on my desk, and the action of the mechanical key switches is so much more enjoyable than mashing the rubber dome switches on a non-mechanical keyboard.
I have been running this blog on Wordpress since 2005. Back then, Wordpress was purely a blogging engine.
In the years since then, Wordpress has grown into something more akin to a CMS built around a blogging engine. At work, we have used it as such for a couple of small storefronts, built around the blog and the Wordpress e-Commerce shopping cart plugin.
Maintaining a full Wordpress installation for my personal blog, however, had become cumbersome. Particularly so since I am not running any other PHP code for personal projects. At Lone Star Ruby Conference, one of the talks finally convinced me that it was time to leave Wordpress behind, and to go with a statically compiled blog engine. I had previous experience with a static site compiler, nanoc, which we use at work for creating static websites. A more blog-aware tool that works similarly held plenty of appeal to me.
I also no longer wished to run this site on hosting that costs me money. I started with shared hosts like Dreamhost, graduated to a Linode VPS (more for experimenting with VPS hosting than for any actual traffic needs), and most recently ditched the VPS and hosted on NearlyFreeSpeech’s low-cost pay-as-you-use hosting. But for how low traffic the site is, paying even what I give to NearlyFreeSpeech seemed unnecessary. Heroku’s free single web dyno was staring me in the face, offering more than enough hosting power for a static version of my site, for $0.
Introducing Jekyll and Octopress
Octopress is a framework built around the Jekyll blogging engine. It provides various plugins and extensions, as well as a nice default theme, to make blogging on Jekyll a nice out-of-the-box experience.
Jekyll allows users to write blog posts in Markdown and compile them into static HTML pages. Instead of writing posts in a web-based panel, posts are created by adding a new Markdown file in the _posts folder, and writing the post in there using the user’s editor of choice. Finally, I am blogging with Vim.
Octopress provides out-of-the-box support for Disqus commenting, recent Twitter tweets in the sidebar, Google Analytics, and a whole host of other added functionality.
Importing Content from Wordpress
My strategy for importing my Wordpress content into a Jekyll blog revolved around Exitwp. Exitwp parses a Wordpress export file and generates a Jekyll blog with the same content.
The Exitwp Github page has instructions for installing dependencies on Ubuntu, but on Homebrew on OS X, the commands were:
(Important: make sure /usr/local/share/python is in $PATH.)
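The Homebrew commands themselves have been lost from this copy of the post. Based on Exitwp’s documented dependencies (PyYAML, Beautiful Soup, and html2text), they would have looked something like the following — a reconstruction, not the author’s exact commands:

```
$ brew install python
$ pip install pyyaml beautifulsoup html2text
```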
Next, I needed to go into my Wordpress admin page and generate a Wordpress export XML file. As of the time of this writing, this is done in Wordpress by logging in to the dashboard as an admin, and going to Tools –> Export.
With the export XML file generated and on my desktop, I set Exitwp to work:
$ python exitwp.py name-of-export-file.xml
One important thing to note: images require some handling. You can make Exitwp download your blog’s images by editing the Exitwp config.yaml file and setting…
… however, this will only download the image files. It will not edit the posts themselves to point to new image locations.
I did not relish the idea of going through all of my old posts and editing each of the image URLs. Instead, what I did was create a wp-content/uploads folder in my Octopress blog’s source/ folder, and copied the contents of wp-content/uploads from my Wordpress blog into there. Since I am hosting the new blog on the same domain, the result is that all of those image files will still be on the same URL. Having a wp-content folder inside my new blog is slightly ugly, but it solves the problem for now, and allows me to gradually move images over and edit image paths on old posts.
Also important to note: comments have to be dealt with separately, too. As a static site has no capacity for comment handling itself, comments on Jekyll/Octopress blogs are handled by Disqus. Fortunately, in my case, I had already moved my Wordpress site to using Disqus commenting. For me, that meant that my comments would carry over to the new site, so long as my post URLs did not change. In my case, this meant making just a small tweak to the config file of the Jekyll blog once it was generated, so that the URL structure would mirror my old Wordpress site’s.
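For reference, the URL structure lives in the permalink setting of Jekyll’s _config.yml. The scheme shown below is an assumption (a common Wordpress permalink format), not necessarily the exact one this site uses:

```
# _config.yml -- mirror the old Wordpress /year/month/day/title/ URLs
permalink: /:year/:month/:day/:title/
```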
Setting up Octopress
After running Exitwp, I have my old blog exported into a raw Jekyll blog. But now, I have to get that blog into Octopress.
This part confused me for a while. It seems like something everyone else just glossed over.
For starters, I knew I wanted to store this blog in Git. The Octopress instructions would have me clone the Octopress repository, but I don’t want Octopress to be the origin on my blog repo. Instead, I did much like this blog post demonstrates – I made my own blank repository, and I added the Octopress repo as a remote head.
So, now I had Git set up, and I had Octopress checked out locally by virtue of having run git pull octopress master. The part that wasn’t immediately obvious to me was how I was to take my Exitwp-generated Jekyll blog and put that in there.
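Concretely, the repository setup looks something like this (the remote URLs here are placeholders, not the author’s actual ones):

```shell
# Create an empty repo and attach two remotes:
#   "origin"    -> my own hosting (placeholder URL)
#   "octopress" -> the upstream Octopress framework
git init octopress-blog
cd octopress-blog
git remote add origin git@example.com:username/octopress-blog.git
git remote add octopress git://github.com/imathis/octopress.git
git remote -v
# then pull in the framework itself:
#   git pull octopress master
```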
Exitwp put my generated blog in exitwp/build/jekyll/blog-name. I copied the contents of this folder, and pasted it into octopress/source. Now, my Jekyll content was where it needed to be in Octopress.
To update my remote repo’s copy of the site, I check everything in, and run
$ git push origin master
And whenever I want to pull in the latest updates from Octopress, it’s
$ git pull octopress master
Deploying to Heroku
Adding to the Git setup even more was the fact that I wanted to deploy this to Heroku. For that setup, I basically followed these instructions starting at the “Deploy to Heroku” section. I had never deployed an app to Heroku, but it was very straightforward.
Since Heroku acts as a Git server, I could have skipped the part where I made my own repository host, and just cloned from Heroku whenever I wanted to access the repo on another machine. But I prefer having a copy of the site in my own Git hosting account (on Bitbucket, for the record), and it’s hardly any additional bother. My Bitbucket repo is on “origin”, the Octopress repo is on “octopress”, and Heroku is on “heroku”.
Whenever I want to push updates to deploy to Heroku, I simply do
$ git push heroku master
I haven’t used Octopress for very long yet, but a few thoughts:
Writing posts in Vim and in Markdown syntax has made me realize how much of a drag using the Wordpress post editor was on my blog writing. I write code all day in Vim, and writing my blog posts there too is much less of a context switch. Flicking back and forth between Vim buffers is a lot less of a hassle. It makes offline blogging a lot easier, too.
I never found a code formatting plugin for Wordpress that I did not hate. Octopress comes with code formatting styling out-of-the-box and it works very well. I’m not a huge fan of the Solarized theme it uses by default, and I may see about changing that in the future. But the important thing is that it works.
Not having to worry about Wordpress updates is a big relief. I can’t say that I stayed on top of updates nearly as much as I needed to. And I don’t have to worry about database backups, either. There’s a “weight off my shoulders” feeling with making this move.
There’s something comforting about having my entire blog history as a series of Markdown files, instead of posts locked away in a Wordpress database table in MySQL.
There are a lot of neat Octopress plugins that I haven’t really delved into yet. But the default out-of-the-box experience is pretty much awesome. Even if, for now, my blog looks just like a bunch of other Octopress blogs.
Tip: Use IFTTT to tweet new posts
One of the plugins I used with Wordpress would add tweets to my Twitter feed, informing followers of new posts to the blog.
Without the server-side component, Octopress lacks this ability. However, thanks to the fact that Octopress generates an RSS feed file, we can use an external service to accomplish the same thing.
IFTTT is a service that allows you to write “triggers” that perform various actions. In this case, I have IFTTT watching my blog’s RSS feed, and whenever it detects a new feed item, it makes a post to my Twitter, as well as one to my Facebook wall.
Tip: Use Pow on OS X for easy testing
By default, users can run rake preview to make Octopress spin up a web server at http://0.0.0.0:4000 and listen for changes to files to automatically rebuild the site for easy previewing.
This process can be made a little nicer with Pow, a handy little Rack webserver for OS X.
Just add a symlink to your site folder in ~/.pow/, and Pow will serve that site, making it reachable at http://symlink-name.dev. Then, run rake watch to make Octopress listen for changes and rebuild pages.
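Assuming the blog lives at ~/octopress (a made-up path for illustration), the setup is just:

```
$ ln -s ~/octopress ~/.pow/blog    # site now served at http://blog.dev
$ cd ~/octopress && rake watch     # rebuild on changes
```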
Gotcha: Drafts aren’t imported by Exitwp
I had accumulated many half-written posts in my Wordpress install over the years. Posts that I totally intend to finish.
Exitwp did not import these (or, more likely, the Wordpress export functionality did not include them in the export in the first place; I’m not sure which).
I ended up fetching these manually.
Gotcha: zsh and square bracket commands
From that point on, I just followed the Octopress documentation to get up and running. I did, however, run into an annoying issue.
Octopress command-line commands often use square brackets, such as:
$ rake new_post["My new post's title"]
Run it in zsh, though, and you get:
$ rake new_post["My new post's title"]
zsh: no matches found: new_post[My new post's title]
The problem is that square brackets are a glob operator in zsh. This blog post pointed me in the right direction. The “solution” is to escape the square bracket characters.
$ rake new_post\["My new post's title"\]
Alternately, zsh users can disable zsh’s GLOB option. From the Octopress Github issue on this problem, though, it sounds like some tweaks will be added to Octopress to address the issue.
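One such zsh-side tweak (not from the original post, but a standard zsh idiom) is to disable globbing for rake entirely with an alias in ~/.zshrc:

```
# ~/.zshrc -- zsh's noglob modifier turns off filename expansion
# for this one command, so the square brackets pass through untouched
alias rake='noglob rake'
```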
Thankfully, jailbroken iOS users can now add Nitro support to Chrome (and other apps) with the new Nitrous app on the Cydia store.
Nitrous adds a menu to the iOS Settings app, which allows users to selectively flag applications to use Nitro, thus allowing that app’s web views to perform as they do in mobile Safari.
Enabling Nitro on non-Safari web browsers is great enough, but it also allows enabling Nitro on other apps that make use of UIWebViews, such as client apps for Reddit, Twitter, and Facebook.
While it’s not quite the same as Google being able to include V8 in iOS Chrome, the ability to use Nitro takes away one of the two major disadvantages Chrome has on iOS. (The other disadvantage – not being the default browser – can be solved with the Browser Chooser app, which allows users to elevate Chrome to the iOS default browser.)
1993 is considered to be the starting point of the commercial Internet. For those of us living in little farm towns in the San Joaquin Valley, however, it would not be until 1995 that the Internet came into our lives.
Weeks before the big opening of the local unlimited-use dial-up ISP, the local newspaper ran an article about the Internet and listed some websites to check out. Among the list was Infoseek, one of the first major search engines. There was a small army of search engines vying for attention in those days: AltaVista, Excite, Lycos, WebCrawler, and of course, Yahoo. For me, though, Infoseek was my home on the Internet for the next 3 years.
As detailed on Wikipedia’s brief History section for Infoseek, the site peaked in 1997, was acquired by Disney in 1998 and merged with other Disney online properties to form Go.com, and by mid-1999, ceased to exist as its own site.
I remember the day Infoseek.com began redirecting to Go.com. The lean search engine pictured above was gone, and a late ‘90s “portal” site was in its place. By that time, however, the folks on ZDTV (the original, superior version of TechTV) had long been cluing people in on a new search engine called Google.
Kids, this was what Google looked like when your parents started using it. At least if your parents were late ‘90s geeks.
Google has been my search engine since the day Infoseek kicked over to Go.com. So, that’s almost 12 years now.
As of the past few months, however, I have been experimenting with another search engine. I’m not talking about Bing. I mean DuckDuckGo.
DuckDuckGo has many things going for it, to differentiate itself from Google.
The first is that DDG takes user privacy very seriously. Indeed, this is probably the main thing they use to separate themselves from Google, as evidenced by the donttrack.us website and the Google-slapping billboard advertising the site:
DDG sticks it to Google with this San Francisco billboard and the donttrack.us website.
donttrack.us explains DDG’s privacy protection better than I can, so I won’t try to re-summarize it here. While I am not a privacy zealot, I do place some degree of value on increased privacy. I use ad-blockers, script-blockers, and other such privacy protecting browser extensions. I do consider DDG’s privacy handling an asset, although I am not going to freeze out Google or stop using Google’s non-search services (of which I use many) over their tracking. When it comes down to it, I will choose functionality over privacy protection, but I will make an effort to try and get both.
Privacy is about the only thing I ever hear brought up as to why one should use DDG. However, there is one feature that I think is a bigger deal: DDG’s !bang syntax.
If you type a search query into DDG and include an exclamation point (commonly called a “bang”) along with a name/code known to DDG, it sends your search to that site’s search function, instead of searching DDG itself.
For example, if you enter “pink floyd !g” into DuckDuckGo…
… you’ll be sent to the Google search results page for “pink floyd”:
Now I can hear you thinking, “why would you go to DDG to type a search meant for Google, instead of just going to Google in the first place?” The answer is, I don’t actually go to a website to do my searching. Browsers like Chrome and Firefox can run a search typed directly into the browser (in the URL bar in Chrome, and in Firefox if you use Foobar or Omnibar).
By setting my browser’s default search engine to DDG, I have direct access to many search engines from the browser bar, by using the !bang syntax.
I search the documentation of things like Rails, jQuery, etc. with !rails, !jquery, etc.
I can search Amazon instantly with !amazon or even the shorter version, !a.
Searching Reddit is !reddit or !r. HackerNews is !hackernews or !hn.
StackOverflow, ServerFault, and SuperUser are !so, !sf, and !su (or !stackoverflow, !serverfault, and !superuser).
Google search is !g or !google, but the various sub-searches are available too. !gn for Google News search. !gi for Google Images.
It is true that one can set up their browser to have search triggers like this, thus removing the need to funnel the search traffic through DDG. But using DDG means all of these search triggers are preconfigured. All I do is point my browser’s search bar to DDG.
Best of all, the !bang options are so intuitive that I never end up looking any of them up. I just try it out and almost 100% of the time, what I think the trigger would be is exactly what it is.
Now, the truth is, DuckDuckGo can’t go toe-to-toe with Google in terms of pure search result quality. A sizeable portion of my searches end up getting appended with the !g bang and sent to Google. That said, the effort required to still use Google in this way is minimal, and the benefit gained from having all the other !bang operators at my fingertips is well worth it. DDG’s results continue to get better, though, and I prefer having my searches go to DDG by default, and only selectively send some to Google.
To top it off, there are also a lot of other little goodies built into DDG. I particularly like the tech goodies. I often enter “random password” into DDG to get a quick and easy 8-character random password.
Most of all, though, it’s the !bang syntax that has made DuckDuckGo stick for me. It has taken me some time to get into the habit of using certain !bang searches, but they’re always time savers once I get into that habit.
The problem was due to building the plugin using a different version of Ruby than the one that Gvim was built with. For me, this is because I was using the Gvim that comes with my Linux distribution (Ubuntu), but not the Ruby that comes included in the distro. Instead, I am running RVM and defaulting to a more current Ruby interpreter in my Bash environment.
The answer, for me, was to switch to the “system” Ruby in RVM, and then rebuild the plugin:
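The exact commands are missing from this copy of the post. For a natively compiled plugin like Command-T (the plugin name and path here are assumptions for illustration), the sequence would look roughly like:

```
$ rvm use system
$ cd ~/.vim/bundle/command-t/ruby/command-t
$ ruby extconf.rb
$ make
```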
I also had this happen on my Mac laptop. Same basic situation: mismatch between the version of Ruby that my copy of MacVim was built with, and the version I was using in RVM. In that case, I believe I installed a new Ruby with RVM, one that matched the version that the particular MacVim build was built with.
To find out which version of Ruby that Vim/Gvim/MacVim was built with, use the command:
:ruby p RUBY_VERSION
Then, if you don’t already have that version of Ruby in your OS, you can build a matching one from within RVM, and use that one to build the plugin.
I have started playing a game called Day Z (a mod for the game ArmA 2). I will write about this at a later time. One thing I will mention, though, is that ArmA 2 is a very CPU-demanding game. As a result, I found myself wanting to get some extra performance out of my system’s processor. I decided that I would overclock it, as I have done on many of my systems before.
One thing to understand about CPU production is that Intel and AMD don’t design completely separate cores for each CPU product they sell. Rather, they’ll make one CPU core and clock it at different speeds to make a range of products. What this means to overclockers is that it’s often trivially easy to buy a CPU from the low-end of that range, and overclock it to run at the speed of the high-end.
To be honest, this barely qualifies for “overclocking”. It’s more like removing an artificial restriction that makes the CPU run slower than the higher speeds of which it is capable. “Real” overclockers like to take CPUs and see just how far they can push those cores, way beyond the normal range that Intel or AMD are willing to clock those chips for wide production. Overclockers utilize more aggressive methods of keeping the chip cool, increase voltage beyond the stock levels, and other such tweaks to push a core to its maximum stable (or, sometimes, not-so-stable) limits.
This is not the level of overclocking that I participate in. I am simply interested in getting the low-hanging fruit. Taking a chip clocked at the low-end of the range of CPUs using the same core, and turning it up to the high-end, is a very easy way to spend less on a CPU and end up with the same level of performance. Sometimes, depending on how much cushion is left, a chip can go even higher relatively painlessly. (Intel/AMD don’t necessarily always max out a core’s capable range. Sometimes they simply move on to the next core.) When buying CPUs, I tend to buy on the low-end of the core’s range, knowing that I will probably have an easy overclock ahead of me if I need more CPU power.
My current system runs an AMD Phenom II x6 hex-core CPU. It’s the 1055T, giving it a stock clock of 2.8 GHz. The other chips that use the same “Thuban” core as this one run at up to 3.3 GHz. What’s more, overclockers found it trivial to clock them up much higher – AMD definitely did not release any CPUs that reach the peak range of this core. So, I bought a nice big heatsink/fan combo (Cooler Master Hyper 212 Plus), replaced the crappy stock AMD heatsink, and clocked the chip up to 3.5 GHz. It’s a nice 700 MHz boost, yet overall it is still a very conservative overclock. With the new heatsink, my CPU temperatures are chilly cool (a good 15 Celsius cooler than they were at 2.8 GHz with the stock AMD fan), so I know heat is not a factor. I hear 3.8 GHz is reasonably achievable, though with some slight voltage bumps. Nothing I’d be afraid of, but at this point, I start to reach diminishing returns. I have successfully picked the low-hanging fruit.
If you’re using rbenv and ruby-build on a Linux that has updated to glibc 2.14 or newer, you may have encountered an error like this when attempting to build an older version of MRI:
$ rbenv install 1.8.7-p357 rbenv: 1.9.3-p194
Inspect or clean up the working tree at /tmp/ruby-build.20120508145707.21228
Results logged to /tmp/ruby-build.20120508145707.21228.log
Last 10 log lines:
callback.func:79:24: error: ‘proc’ undeclared here (not in a function)
callback.func:79:39: error: ‘argc’ undeclared here (not in a function)
callback.func:79:45: error: ‘argv’ undeclared here (not in a function)
callback.func:82:1: error: expected identifier or ‘(’ before ‘}’ token
dl.c:106:1: error: expected ‘;’, ‘,’ or ‘)’ before ‘static’
cp ../.././ext/dl/lib/dl/import.rb ../../.ext/common/dl
make: *** [dl.o] Error 1
make: *** Waiting for unfinished jobs....
make: Leaving directory `/tmp/ruby-build.20120508145707.21228/ruby-1.8.7-p357/ext/dl'
make: *** [all] Error 1