Tenerife Skunkworks

Trading and technology

Transparency and Masking in Lispworks CAPI

CAPI is a good cross-platform GUI toolkit and has been used to write apps like Prime Trader. Your apps will have a native look on each platform and you won’t have to do anything special to make it happen. Still, bits of platform-specific code are sometimes required, and one example is alpha-blending.

There’s no support for alpha-blending in CAPI. You can do masking by specifying a transparent color in your images, but that is far too basic to render a poker room, so I have to resort to platform-specific code to make it happen.

The poker room is a composite image where most of the elements, save for the carpet, have a greyscale mask in a separate file. A greyscale (0-255) mask lets you specify translucency and simulate shading. This is how the shade cast by the table and chairs is done. The carpet is drawn first, two to ten chairs go next, the table is sandwiched on top and finally the cards, chips, buttons, etc. are drawn.

I should be able to make the poker room a pinboard layout, set the composite poker room image as the background and make dynamic objects such as cards, chips and buttons into pinboard objects.
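Something along these lines, sketched with placeholder image names and positions:

(capi:contain
 (make-instance
  'capi:pinboard-layout
  :description
  (list
   ;; the blended composite sits at the bottom of the pinboard
   (make-instance 'capi:image-pinboard-object
                  :image "room.bmp" :x 0 :y 0)
   ;; dynamic objects such as cards and chips go on top
   (make-instance 'capi:image-pinboard-object
                  :image "card.bmp" :x 300 :y 200))))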

Alpha-blending is key to this project and is surprisingly easy to simulate even for image formats that do not support an alpha channel. I’m storing most of the images as JPEGs with some BMPs mixed in, and the format for each picture is chosen to save space while preserving image quality. The carpet, for example, is just 360K as a JPEG but over 1.1MB when stored as a PNG without the alpha channel.

The formula to use for alpha-compositing is

displayColor = sourceColor * alpha / 255 + backgroundColor * (255 - alpha) / 255

which requires you to retrieve the value of each pixel in the three images and store the target pixel back after the multiplication. LispWorks color components are float values from 0 to 1, so the formula will look slightly different.
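With the components and the mask value normalized to the 0–1 range, it reduces to

displayColor = sourceColor * alpha + backgroundColor * (1 - alpha)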

The kosher way to accomplish the blending with LispWorks is to load the images, create image-access handles, transfer the pixel data into the image-access structures and then retrieve pixels in a loop. Pixel values need to be converted into a color spec before individual color components can be retrieved. This is the naive way of doing it, sketched below with my own function and variable names (the mask’s red component stands in for the grey level):
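(defun blend-naive (port background foreground mask)
  ;; a sketch, not the original code
  (let ((bg-access   (gp:make-image-access port background))
        (fg-access   (gp:make-image-access port foreground))
        (mask-access (gp:make-image-access port mask)))
    (unwind-protect
        (progn
          ;; pull the pixel data into the access structures
          (gp:image-access-transfer-from-image bg-access)
          (gp:image-access-transfer-from-image fg-access)
          (gp:image-access-transfer-from-image mask-access)
          (dotimes (y (gp:image-height background))
            (dotimes (x (gp:image-width background))
              (let ((bg (color:unconvert-color
                         port (gp:image-access-pixel bg-access x y)))
                    (fg (color:unconvert-color
                         port (gp:image-access-pixel fg-access x y)))
                    (alpha (color:color-red
                            (color:unconvert-color
                             port (gp:image-access-pixel mask-access x y)))))
                ;; displayColor = source * alpha + background * (1 - alpha)
                (setf (gp:image-access-pixel bg-access x y)
                      (color:convert-color
                       port
                       (color:make-rgb
                        (+ (* (color:color-red fg) alpha)
                           (* (color:color-red bg) (- 1.0 alpha)))
                        (+ (* (color:color-green fg) alpha)
                           (* (color:color-green bg) (- 1.0 alpha)))
                        (+ (* (color:color-blue fg) alpha)
                           (* (color:color-blue bg) (- 1.0 alpha)))))))))
          ;; push the blended pixels back into the background image
          (gp:image-access-transfer-to-image bg-access))
      (gp:free-image-access bg-access)
      (gp:free-image-access fg-access)
      (gp:free-image-access mask-access))))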

This is also a very slow way of doing it: it takes 11-12 seconds to blend three 794x538 JPEG images (carpet, table, mask) on my PowerBook G4 1.25GHz.

I did some digging around at the beginning of the year to be able to load image data into OpenGL textures. I’m not using OpenGL this time, but I do need to assign the bitmap data back to the image after modifying it.

Getting the Cocoa image handle is done by calling (image-ns-image image), assuming that image is the result of (gp:load-image …). Bitmap data can then be retrieved along these lines (a sketch using the LispWorks Objective-C bridge; the helper name, and the assumption that the image’s first representation is its NSBitmapImageRep, are mine):
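(defun image-bitmap-data (image)
  ;; returns the raw byte pointer plus the representation itself,
  ;; which is needed again when the image is updated later
  (let* ((ns-image (image-ns-image image))
         (reps (objc:invoke ns-image "representations"))
         (rep  (objc:invoke reps "objectAtIndex:" 0)))
    ;; bitmapData returns a foreign pointer to the first bitmap byte
    (values (objc:invoke rep "bitmapData") rep)))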

and blending is just an optimized array loop over the raw bytes. The sketch below assumes the bitmapData pointers are typed as pointers to unsigned bytes and that all three images are 24-bit RGB, with the greyscale mask replicated across its channels:
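(defun blend-bytes (bg-data fg-data mask-data width height)
  ;; bg-data, fg-data and mask-data are bitmapData pointers; the
  ;; blended result is written back over the background bytes
  (declare (optimize (speed 3) (safety 0)))
  (dotimes (i (* width height 3))
    (let ((alpha (fli:dereference mask-data :index i))
          (fg    (fli:dereference fg-data :index i))
          (bg    (fli:dereference bg-data :index i)))
      ;; same formula as before, in 0-255 integer arithmetic
      (setf (fli:dereference bg-data :index i)
            (truncate (+ (* fg alpha) (* bg (- 255 alpha))) 255)))))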

There’s one last bit that needs to be done before the image data is updated. It took me a whole evening of poking around and googling before I finally found my answer.

TIFFRepresentation returns a copy of the bitmap data, and the easiest way to update the image is to remove the returned representation and add it right back, with code along these lines (using the ns-image and rep bindings from the sketch above):
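;; rep is the representation whose bitmapData was modified above
(objc:invoke ns-image "removeRepresentation:" rep)
(objc:invoke ns-image "addRepresentation:" rep)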

Voilà! It now takes less than a second to blend three large images together. Problem solved, although I will still need to poke around to get the appropriate code working under Windows and Linux.