Category: programming

Another Emacs Project Library

I've posted a new project handling library to the Emacs Wiki: mk-project.el. A "project" in this sense is a directory of source files. You define a named project and provide settings for it: base directory, source file patterns, file patterns to ignore, the TAGS file, the compile command, etc.

Features:

  • Quickly switch between projects, optionally closing the files in the old project.
  • Use the new project's TAGS file -- and be able to rebuild the TAGS file based on project settings.
  • Run find-grep from the new project's base directory, ignoring certain files or directories based on project settings.
  • Run compile with the project's preferred compile command.
  • Open any file in the project quickly based on regex matches.
  • Quickly open dired on the project's base directory.
  • Define per-project startup and shutdown hooks -- useful for opening often-used files.

Perhaps the feature I'm happiest with is project-find-file. The library maintains a list of all the files under the project's base directory in a special buffer, *file-list*. Project-find-file asks for a regular expression matching part of a file's path or name and either 1) opens the file if there is only one match in *file-list*, or 2) lets you select among the matching files with Emacs' built-in completion mechanism. It also sometimes comes in handy to search the *file-list* buffer directly when you want an overview of the entire project. I often work with projects having thousands of files in deep folder hierarchies, so project-find-file is very convenient.

After writing my library, I discovered a similar library, ProjMan by David Shilvock. The project operations we offer are similar - we even recommend similar keybindings! ProjMan is perhaps more complete, but I don't feel my time was wasted writing mk-project.el -- elisp is a pleasure to code in.

Lisp Advocacy at its Best

Kenny says, Go Write Something in Lisp!

Git Backstage, Under the Covers and on the Down Low

Don't tell my employer, but I'm an undercover git user. Officially, my company uses Clearcase UCM - but I do almost all my coding outside Clearcase. I use git behind the official version control system. I'll tell you why I risk a scolding from my boss[1] and the IT department for using an unsanctioned tool. But to do that, I'll have to tell you a bit more about both Clearcase[2] and git.

Clearcase Basics

Clearcase is a centrally managed, client-server type system -- a big server hosts the repository which tracks all the project's info: individual files, the directory tree, versions, branches, users, permissions, Social Security numbers, DNA sequences, etc. The client (that's you) gets a working copy of the project files and tools to interact with the server. Checking out, checking in and viewing history all require connecting to the server. You can't do a thing without being tethered to the central repository.

In the Clearcase UCM development model, programmers create new "streams" ("branches" in generic terms), each having an "activity" (which is Clearcase talk for "changeset"). When you check in a file, it's recorded against an activity. Therefore, an activity is a set of checked-in files (and directory changes). Other developers on the project won't see your changes until you "deliver" (merge) your activity (changeset) to the parent stream (the main branch). End of vocabulary lesson.

Where Clearcase Falls Down

My biggest issue with the Clearcase model is that it inhibits both experimental and iterative programming. I assert the following:

  1. Experimental coding requires branching. Let's hope this isn't a controversial statement. The only arguments people make against branching are practical - their VCSs don't support it well. Or they don't support merging well. Clearly, if you had a VCS that could branch and merge well, you'd use it all the time.
  2. Clearcase discourages experimental branching as streams are centralized, limited and public. On my project, creating "unnecessary" experimental streams is discouraged as each stream incurs the maintenance penalty of using more resources on the server.
  3. Merging in Clearcase is also less than perfect. Instead of a patch-based approach, Clearcase restricts merges to streams that have a common baseline. And guess who has to create, maintain and recommend these baselines? You, the user. Want to merge between streams that have diverged - that don't share an exact baseline? Forget it: you have to go outside Clearcase to do it.
  4. Finally, Clearcase activities don't allow logical commits within an activity - a checkin is limited to just one file. You cannot check in 4 files as a group representing a logical change. This makes it hard both to organize your work and to revert multi-file changes if they don't work out.

Let's see if git could help. (Ok, it can - or else why would I continue writing?)

Git Comes (Quietly) to the Rescue

Git is the distributed VCS used by Linus to maintain the Linux kernel. Where Clearcase requires a centralized server, git requires none -- the repository is stored in a single hidden directory at the top of your project's file tree. You can create a git repository from an existing project very easily:

cp -R /my/clearcase/project /my/git/project
cd /my/git/project
git init
git add .
git commit

A git repository is local and under your complete control; you don't need anybody's permission to create one. And obviously you don't need network connectivity to a centralized server. You won't need to file a single TPS report to set up a git repository. And, if needed, you can keep your repo a secret.

What was I complaining about? Oh yes, branching and logical commits.

Branching is simple and lightweight in git. Let's say you're working in your 'bug_fix' branch, and you decide that the fix could be simplified by moving some methods from class Foo to class Bar. Here's how you'd create a new branch called 'foo_refactoring', make some changes in it, and then merge those changes back to the master branch:

cd /my/git/project
git checkout bug_fix
git checkout -b foo_refactoring
# make some changes to Foo and Bar, compile, and test
git add Foo.java Bar.java
git commit -m 'Refactored Foo'
git checkout bug_fix
git merge foo_refactoring

If you decided the refactoring was unnecessary, you could have skipped the merge -- or even permanently removed the experimental branch. The branch was created quickly -- just to try out some ideas -- and it can be ignored or removed. Branching is up to you, not the sysadmin.
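Throwing away the experiment is a single command (using the branch names from the example above):

git checkout bug_fix            # leave the experimental branch first
git branch -D foo_refactoring   # -D forces deletion even though it was never merged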

Finally, git lets you build logical changesets. As you can see in the foo_refactoring example, a commit can contain any number of file changes. You can build a new feature piece-by-piece, committing chunks of related work together. This is good for both you and your reviewers!
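For example, a feature might land as a couple of related commits (the test file name here is just a placeholder):

git add Foo.java FooTest.java
git commit -m 'Add input validation to Foo'
git add Bar.java
git commit -m 'Make Bar use the new validation'
git log --stat    # review the series before sharing it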

So, how are you going to use git behind your VCS?

Getting it to Git

Getting your Clearcase (or CVS, Subversion, Perforce, etc) code into git is easy: copy your working dir to some local drive space and do the "git init; git add .; git commit" sequence.
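If you're starting from a Subversion or CVS checkout, it's also worth keeping the old system's metadata out of git. A rough sketch, with example paths:

cp -R /my/svn/checkout /my/git/project
cd /my/git/project
git init
echo '.svn/' > .git/info/exclude    # don't track Subversion's metadata directories
git add .
git commit -m 'initial import'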

Dealing with rebases is easy too. Other users have probably made changes to the codebase (in Clearcase) and you'll need to merge your work with theirs before merging to the main stream. You can handle this by rsyncing the upstream code into a git branch, then using git rebase to replay your work on top of that code - fixing any conflicts as necessary. Git rebase basically pops your current commits off your branch, moves the branch to the tip of the requested branch, then re-applies your commits on top. It helps keep the history of your changes simple. I recommend doing all development work in a sub-branch off master (master is git's default branch) and keeping master for rebases. For example:

cd /my/clearcase/project
cleartool rebase -recommended # or 'cvs up' or whatever
cd /my/git/project
git checkout master
# note the trailing slash: copy the directory's contents, not the directory itself
rsync -r /my/clearcase/project/ /my/git/project/
git add .    # pick up any files added upstream
git commit -a -m 'rebased from clearcase'
git checkout dev_branch
git rebase master

Getting it back to Clearcase

Git has made my daily coding much nicer - but if I want my code built into my product, I still have to get it back to Clearcase.

I find the simplest and safest way to do this is by applying a series of patches to the Clearcase-controlled working dir. You can ask git to generate a patch for a single commit using git diff:

git diff 345983^ 345983 > my-change.patch

But if you give git-diff a branch name instead of a commit, it generates a single patch covering every change that diverges between the two branches. Assuming your master branch represents the rebased Clearcase code and your dev branch has been rebased onto master, this is exactly what you need!

cd /my/git/project
git checkout bug_fix
git diff master > bug_fix.patch
cd /my/clearcase/project
# check out any files the patch touches, if necessary
patch -p2 < /my/git/project/bug_fix.patch

Build, test and submit in Clearcase - you're done!

Final Thoughts

Having a full understanding of your VCS's data model is essential to using it correctly. Perhaps the root cause of my preference for git over almost any other system is its simple conceptual model (Git for Computer Scientists does a nice job explaining the data model). Clearcase is typical enterprise software -- its feature sheet is very long and highlights words that CIOs love, like "reliable", "maintainable", and "support contract", but the documentation is thrifty when discussing the system's internals. Version control is too essential and too difficult to trust to a system you don't understand. So I use git - behind the scenes if necessary.

Update (2008/8/5): As several commenters have noticed, if you can use Clearcase snapshot views instead of dynamic views, you can init a git repo directly in the view's storage directory and then use git from that directory or clone it. With the repo living in your snapshot view, the rsync step becomes a "git pull" and the patch step becomes a "git push". This is a big win. Alas, my employer requires dynamic views due to tool limitations, so I didn't discuss this method in the original entry.
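A rough sketch of that setup, with example paths and the branch names used earlier (I haven't been able to try it myself, so treat it as a sketch):

cd /my/snapshot/view
git init
git add .
git commit -m 'import Clearcase snapshot view'

# do your daily work in a clone of the view's repository
git clone /my/snapshot/view /my/git/project
cd /my/git/project
git checkout -b dev_branch

# after updating the snapshot view from Clearcase, replace the rsync step:
git checkout master
git pull origin master
git checkout dev_branch
git rebase master

# replace the patch step: publish dev_branch into the view's repo, then
# merge it into master from inside the view and check the files in to Clearcase
git push origin dev_branch

Pushing into a non-bare repository works fine as long as you push to a branch that isn't currently checked out in the view.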


[1] Hi Boss! What I'm discussing here is not really any less safe than having uncommitted work in any working dir (which everybody does). But the threat of a knuckle rapping adds some drama to this post - don't you think? ;-)

[2] Almost everything discussed in this post is true of any centralized version control system (CVS, Subversion, etc.), not just Clearcase.

Programmers Can't Program?

Jeff Atwood, at Coding Horror, is flabbergasted by the number of programmers who can't program. The post claims that a significant number of programming interviewees struggle to write tiny programs like:

.... a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
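For the record, the whole exercise fits in a few lines of shell (any language would do):

for i in $(seq 1 100); do
    out=""
    [ $((i % 3)) -eq 0 ] && out="Fizz"
    [ $((i % 5)) -eq 0 ] && out="${out}Buzz"
    echo "${out:-$i}"    # fall back to the number when neither divisor matches
done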

Can this really be true? Where do these "programmers" come from, I wonder?

There's a related quote in Dreaming in Code where Joel Spolsky, commenting on formal software methodologies, says,

Anyway, the majority of developers don't read books about software development, they don't even read Slashdot. So they are never going to get this, no matter how much we keep writing about it.

Here we have 2 sources of anecdotal evidence suggesting that there are a large number of "programmers" who just don't care about programming. I suppose that any career that pays well attracts people who are more interested in the paycheck than the work, but programming is such a demanding task that I had assumed most practitioners gave a crap! I'm saddened to hear otherwise.

Link to story

A Confession

I have a confession.

I've posted a few things on this site about vim. I like vim, a lot. But lately I've..., well I've strayed. I've been seeing another editor. You can probably guess who. That's right, emacs.

I've never been especially partisan in the editor wars. As long as you use a "real" editor, a programmable editor, I won't heckle you. But why emacs, why now? I've been learning lisp. And learning lisp is pretty much the same thing as learning emacs. I tried it in vim for about a day. Then I developed my wandering eye.

So, sorry vim purists. I still use, love and advocate "our" editor. But I have to reserve the right to pick the best tool for the job, and occasionally it will be emacs.

Dreaming in Code

I recently picked up Dreaming in Code which chronicles the Chandler project while investigating the general difficulties of building software on time and under budget. I'll give the book an enthusiastic two thumbs up, with the caveat that the intended audience is the lay public, not those of us who write software daily.

My favorite chapter, Engineers and Artists, opens thusly:

From a panel of experts assembled at a conference on software engineering, here are some comments:

"We undoubtedly produce software by backward technologies."

"Particularly alarming is the seemingly unavoidable fallibility of large software, since a malfunction in an advanced hardware-software system can be a matter of life and death."

"We build systems like the Wright brothers built airplanes -- build the whole thing, push it off a cliff, let it crash, and start over again."

"Production of large software has become a scare item for management. By reputation, it is often an unprofitable morass, costly and unending."

"The problems of scale would not be so frightening if we could at least place limits beforehand on the effort and cost required to complete a software task... There is no theory which enables us to calculate limits on the size, performance, or complexity of software. There is, in many instances, no way event to specify in a logically tight way what the software product is supposed to do or how it is supposed to do it."

...

"Some people opine that any software system that cannot be completed by some four or five people within a year can never be completed."

I nodded along in agreement with each quotation. The author goes on to explain that the conference that produced the words above took place in... wait for it... 1968. The event was organized by NATO to address, in their words, "The Software Crisis".

Depending upon how your coding went today, you'll either be heartened by that, or depressed at the state of the field.

I take back my caveat: the book will entertain and provide solace to professional software people. And it should be required reading for anybody thinking about getting into software.

In case you didn't know it already, Rosenberg reminds us, software is hard. Yes it is, yes it is.

Re: Classical learning curves for common editors

While I'm on my Vim theme - here's a set of graphs showing learning curves for common editors.

Hilarious!

Multi-file Editing in Vim 7.0

Vim supports editing many files at once. That's a great feature for programmers like me. I can copy and paste text between files, search for identifiers in other files, and view 2 (or more) files on one screen. Vim provides copious commands to list, add, delete, cycle through, and split buffers. And then Vim 7.0 comes along and adds tabs to the mix.

Help! I'm drowning in options.

I find myself wishing for a simpler model for multi-file editing. Maybe a tab-per-buffer model where you could glance at the tab names to know which buffers are open. But why give up splits? I use :stselect <ident> to follow tag references all the time. All I really want to do is know what files I have open, and move between them without wrecking my tab and window layout.

To help untangle these options, let me summarize vim's data model - the primary elements are buffers, windows and tabs.

Buffers

A buffer is a file that has been loaded into memory for editing. A buffer is modified during your editing session but isn't saved to disk until you do :w. New buffers are created with :e <filename>.

The :ls command lists all my buffers. I often rotate through buffers with :bn (buffer next). You can change to a specific buffer with :b<buffer number>, but I find that tedious, as you first have to use :ls to find the buffer number. Plugins like bufexplorer and minibufexpl can make switching buffers more intuitive.

Windows

A window is nothing more than a view of a buffer. Multiple windows can view one buffer, so edits made in one window show up in every window on that buffer - try :split and make some edits. More commonly, though, you use windows to view different buffers at the same time.

I tend to use windows ephemerally - I'll use a split command to quickly reference another file and then close the new window. Programmers like to be able to view lots of text at once, and windows cut down on viewable lines.

I also find the stock commands to flip between windows cumbersome so I have mapped <leader>w to Ctrl-W w, which lets me toggle through the window list.

Tabs: new for Vim 7.0!

A tab is a full-screen collection of windows. Tabs let you partition your windows into any scheme you like. A 'tabline' at the top of the vim window lets you know which tab you are in. gt (or :tabnext) moves you to the next tab. Sometimes I prefer one tab per buffer (try :tab sball), or you could give a C header file and its implementation their own tab.

Frustrations and switchbuf to the Rescue

All this brings me to my problem - I have too many ways to move between buffers. The :ls command lists all the buffers and their buffer numbers; :b<number> replaces the contents of the current window with the requested buffer; gt moves to the next tab; and I configured <leader>w to move to the next window (within a tab). That's a lot of options.

Generally, I want a way to move between 'files' without disturbing my current window and tab layout. As I finally learned, a combination of :sb and switchbuf makes this possible.

The :sb <buffer> (split buffer) command usually opens a new window on the requested buffer. You can give the command a buffer number or a filename. With the set switchbuf=usetab option, you tell vim to first search the windows in every tab for the requested buffer - if the buffer is already visible in some window, the focus jumps straight to that window. The :sbn command (split buffer next), used with switchbuf, lets me flip through the list of my open files.

Problem solved!

Vim 7.0 Is Here

Vim 7.0 has been released and I'm ever so pleased. I've been waiting for the tabs feature for a long time. Here's the release announcement.

Yes, I'm serious. This is the sort of thing I get excited about.

How to fix Subversion errors after upgrading your Berkeley DB library

After a routine "apt-get upgrade" of Debian testing, I found myself unable to use my Subversion repository. I got an error message when trying to commit a file:

svn: Berkeley DB error while opening environment for filesystem db:
DB_VERSION_MISMATCH: Database environment version mismatch
svn: bdb: Program version 4.3 doesn't match environment version

A note from the Subversion FAQ had this to say:

After upgrading to Berkeley DB 4.3, I'm seeing repository errors.

Normally one can simply run svnadmin recover to upgrade a Berkeley DB repository in-place. However, due to a bug in the way this command invokes the db_recover() API, this won't work correctly when upgrading from BDB 4.0/4.1/4.2 to BDB 4.3.

Use this procedure to upgrade your repository in-place to BDB 4.3:

  • Make sure no process is accessing the repository (stop Apache, svnserve, restrict access via file://, svnlook, svnadmin, etc.)
  • Using an older svnadmin binary (that is, linked to an older BerkeleyDB):
    1. Recover the repository: 'svnadmin recover /path/to/repository'
    2. Make a backup of the repository.
    3. Delete all unused log files. You can see them by running 'svnadmin list-unused-dblogs /path/to/repository'
    4. Delete the shared-memory files. These are files in the repository's db/ directory, of the form __db.00*

The repository is now usable by Berkeley DB 4.3.

As the instructions note, you need a copy of Subversion linked with a pre-4.3 version of the Berkeley database library. Subversion uses Berkeley DB via the APR libraries. So we need to install appropriate versions of Berkeley DB, APR and Subversion.

My notes are below. Note that I installed the APR and Subversion software into a local directory (/home/mk/proj/svn_db/local in my case). Also, my Subversion repository is in /data/svnroot.

# export LD_LIBRARY_PATH=/home/mk/proj/svn_db/local/lib:/usr/local/BerkeleyDB.4.2/lib

# wget 'http://downloads.sleepycat.com/db-4.2.52.tar.gz'
# tar -xvzf db-4.2.52.tar.gz
# cd db-4.2.52
# cd build_unix
# ../dist/configure
# make
# make install

# wget 'http://archive.apache.org/dist/apr/apr-0.9.5.tar.gz'
# tar -xvzf apr-0.9.5.tar.gz
# cd apr-0.9.5
# ./configure --prefix=/home/mk/proj/svn_db/local
# make
# make install

# wget 'http://archive.apache.org/dist/apr/apr-util-0.9.5.tar.gz'
# tar -xvzf apr-util-0.9.5.tar.gz
# cd apr-util-0.9.5
# ./configure --prefix=/home/mk/proj/svn_db/local --with-apr=/home/mk/proj/svn_db/local --with-berkeley-db=/usr/local/BerkeleyDB.4.2/

# make
# make install

# wget 'http://subversion.tigris.org/downloads/subversion-1.2.3.tar.bz2'
# tar -xvjf subversion-1.2.3.tar.bz2
# cd subversion-1.2.3
# ./configure --prefix=/home/mk/proj/svn_db/local --with-apr=/home/mk/proj/svn_db/local --with-berkeley-db=/usr/local/BerkeleyDB.4.2/
# make
# make install

# su
# /home/mk/proj/svn_db/local/bin/svnadmin recover /data/svnroot
# tar -cvf ~/svnroot_backup.tar /data/svnroot

Then I executed steps 3 and 4 from the FAQ. At this point, I was able to commit files to my repository again.
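For the record, steps 3 and 4 amounted to something like this -- delete the unused Berkeley DB log files, then the shared-memory files (double-check the list-unused-dblogs output before deleting anything):

# /home/mk/proj/svn_db/local/bin/svnadmin list-unused-dblogs /data/svnroot | xargs rm
# rm /data/svnroot/db/__db.00*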

Link to story
