Category: Software Development

Unknown Armies Online?

With all the (hopefully) useful information on this blog, do you know what the most viewed article is? My Unknown Armies Online post. By far. I am talking more than double the views of my second place post. For years now.

It was just a thought. Just half a thought, really. Nothing was ever supposed to come of it unless I found the time. Perhaps there is a demand for it?

Convert an OpenCV 2 Image to an Allegro 5 Image In C/C++

Just a quick sample for converting an OpenCV 2 image (Mat) to an Allegro 5 image (ALLEGRO_BITMAP).

First we need to set up some things and have places to store some stuff:

#include <allegro5/allegro.h>
#include <allegro5/allegro_image.h>
#include <cv.h>
#include <highgui.h>

cv::VideoCapture video([device number/filename]);
cv::Mat frame;
ALLEGRO_BITMAP *image = al_create_bitmap([width], [height]);
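
Note that the snippet above assumes Allegro itself has already been brought up and a display created somewhere earlier in the program. A rough sketch of that setup, with the same placeholders and next to no error handling:

if ( !al_init() || !al_init_image_addon() )
	return 1;	// bail out if Allegro refuses to start
ALLEGRO_DISPLAY *display = al_create_display([width], [height]);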

Next the guts:

video >> frame;
if ( !frame.empty() ) {
	al_set_target_bitmap(image);
	al_lock_bitmap(image, ALLEGRO_PIXEL_FORMAT_ANY, ALLEGRO_LOCK_WRITEONLY);
	for ( int y = 0; y < [height]; y++ ) {
		for ( int x = 0; x < [width]; x++ ) {
			cv::Vec3b &pixel = frame.at<cv::Vec3b>(y, x);
			al_put_pixel(x, y, al_map_rgb(pixel[2], pixel[1], pixel[0]));
		}
	}
	al_unlock_bitmap(image);
}

A few notes:

  • OpenCV 2 does not usually work in RGB unless you make it. It is typically the reverse, BGR. Unless you have a specific need I see no reason not to do the conversion on the fly as above.
  • This sample assumes everything is the same width, height, color depth, etc., so watch out for that. Allegro, in particular, may slow to a crawl if you do not watch your conversions.
  • I am not very happy with the performance of this so it does need some work in that respect. It does, however, work very well otherwise. My goal is to get my Atom-based netbook running this smoothly. The Raspberry Pi may be a pipe dream but I am going to try.
  • This was tested in Linux with hardware I know what to expect from. If there is any chance your webcam/video/whatever may return something other than a 24-bit (uint8, uint8, uint8) BGR color space you will need to account for that. Both OpenCV and Allegro have a number of functions/macros for that kind of thing (see the sketch below).
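
A quick, untested sketch (using the same includes as above) of what that check and an explicit BGR-to-RGB conversion might look like:

// Make sure the frame really is 8-bit, 3-channel BGR before trusting the loop above.
if ( frame.type() != CV_8UC3 ) {
	// handle or convert the odd format here
}
// Or, if you actually want RGB on the OpenCV side:
cv::Mat rgb;
cv::cvtColor(frame, rgb, CV_BGR2RGB);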

This is mostly for my own notes but I figured someone else might also be interested. None of this is meant to be complete but, if you are struggling like I was, this should be all you need to pass that hurdle. Give a man a fish… alright, back to my cold, week-old “Chinese” food and root beer.

Update 2012.11.28
After some more experimentation (and a nudge in the right direction from Peter Wang) I have tweaked the guts and it now runs much, much faster:

video >> frame;
if ( !frame.empty() ) {
	ALLEGRO_LOCKED_REGION *region = al_lock_bitmap(image, ALLEGRO_PIXEL_FORMAT_ANY, ALLEGRO_LOCK_WRITEONLY);
	for ( int y = 0; y < [height]; y++ ) {
		for ( int x = 0; x < [width]; x++ ) {
			// Assumes the locked region uses a 32-bit pixel format.
			uint32_t *ptr32 = (uint32_t *)region->data + x + y * (region->pitch / 4);
			*ptr32 = (frame.data[y * [width] * 3 + x * 3 + 0] << 16)
			       | (frame.data[y * [width] * 3 + x * 3 + 1] << 8)
			       | (frame.data[y * [width] * 3 + x * 3 + 2] << 0);
		}
	}
	al_unlock_bitmap(image);
}
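
And, in case it is not obvious, getting the converted image on screen afterwards is just the usual Allegro calls (assuming the display created during setup is called display):

al_set_target_backbuffer(display);	// draw to the display, not the bitmap
al_draw_bitmap(image, 0, 0, 0);
al_flip_display();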

Compile Allegro 5.0.x on Linux Mint and Ubuntu

As a sister article to my Cross Compile Allegro 5 Programs in Linux for Windows post, here are the steps I took to get Allegro 5 installed on Linux Mint 13, Linux Mint 14, and Ubuntu 12.10:

  1. Download and extract the latest .tar.gz-compressed source.
  2. Install the required packages: sudo apt-get install -y cmake g++ freeglut3-dev libxcursor-dev libpng12-dev libjpeg-dev libfreetype6-dev libgtk2.0-dev libasound2-dev libpulse-dev libopenal-dev libflac-dev libdumb1-dev libvorbis-dev libphysfs-dev
    • [Note] It would be a good idea to do a sudo apt-get update first.
  3. Create a workspace: mkdir "build" && cd "build/"
  4. Create make files: cmake "../"
    • [Note] By default cmake will want to configure make for a release shared build. If you want a debug build you will need -DCMAKE_BUILD_TYPE=Debug, or -DCMAKE_BUILD_TYPE=Profile for a profiling build.
  5. Compile: make
    • [Optional] By default make will not eat up all the processing power it can. Add -j# to change this behavior, where # is the number of jobs you would like to have running in parallel. If your machine is more or less idle, the number of processors available should not hurt anything. If you are actively using your machine you might want to use half that number instead.
  6. Install to respective paths: sudo make install && sudo ldconfig
    • [Optional] Recommended if you are unsure as to why this step is optional.

If you want to compile an Allegro 5 C++ application– assuming you completed all the steps above and have g++ installed– you can run g++ [source file(s)] -o [output] `pkg-config --libs allegro-5.0`. There are, of course, many more Allegro 5 add-ons (check out pkg-config --list-all | grep allegro) but I will leave using those up to you to discover on your own.
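
If you want something to test the install with, a minimal program might look like the following. It only uses the core library so the pkg-config line above is enough; the al_rest() is just there so the window does not vanish instantly:

#include <allegro5/allegro.h>

int main() {
	if ( !al_init() )
		return 1;
	ALLEGRO_DISPLAY *display = al_create_display(640, 480);
	if ( !display )
		return 1;
	al_clear_to_color(al_map_rgb(0, 0, 0));	// black window
	al_flip_display();
	al_rest(2.0);				// keep it up for a couple of seconds
	al_destroy_display(display);
	return 0;
}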

As of this writing Allegro 5 v5.0.8 was the latest version.

Update 2012.11.28
Seems I already had some things installed from some other projects so I did not notice some missing dependencies. Thanks to weapon_S and sorry about that.

Cross Compile Allegro 5 Programs in Linux for Windows

The Allegro game programming library has released v5 and with it comes a whole mess of great changes. Thing is, since most of the applications you make with it are going to be games, your main audience lives in Windows. Since I am really upset with Microsoft's offerings in this area I needed a way to capture this audience without having to drive myself insane.

What follows is how I am making Windows executables in Linux using Allegro. Please note that I live in Ubuntu (currently 10.10, Maverick Meerkat). You may have to make some slight changes to fit your distro but that should not be a big deal. These instructions assume a clean installation where no other copy of Allegro has been installed (not sure if that would be a problem or not as I have not tested).

  1. Install the required programs:
    sudo apt-get install cmake mingw32
  2. Retrieve and uncompress DirectX:
    Download and copy the DirectX headers and libraries to /usr/i586-mingw32msvc/. Note that the file structure within the archive should match the layout of the mentioned directory. When prompted to overwrite any files do so, but make sure you have a backup first in case something explodes.
  3. Retrieve and uncompress Allegro 5:
    Download allegro-5.x.x.tar.gz from their site. Uncompress it some place easy to get to. I used my desktop as we can delete this when done.
  4. Compile from source:
    In a terminal type
    cd [path to uncompressed archive] && mkdir build && cd build && cmake -DCMAKE_TOOLCHAIN_FILE=../cmake/Toolchain-mingw.cmake .. && make && sudo make install
    This may take a little while depending on your hardware.

You should now have a functioning cross compiler setup for Allegro 5. Just replace gcc with i586-mingw32msvc-gcc or g++ with i586-mingw32msvc-g++ (for example, I compiled my first test with i586-mingw32msvc-g++ alleg.cpp -lallegro.dll -lallegro_image.dll -lallegro_font.dll -o alleg.exe). The DLLs you will need are in /usr/i586-mingw32msvc/bin/. You may now delete all of our working files on your desktop (or wherever you put them).

There are still one or two things I need to figure out. For one, dynamically linked programs are peachy on Linux; I am comfortable in my assumption that most people using Linux either already expect this or are willing to learn. Windows, not so much. I want to statically link for that platform but I have yet to experiment with that. Another thing is the fact that my current method has Windows opening up a console window in addition to the “main” window. I am sure this is also very simple but have not yet played with it.

The Social Web

In the past I have mentioned Facebook and related sites. Whenever I have talked about them, however, it has been in a technical capacity. I never really gave much thought to the why.

Very quickly, what could we say about Facebook on a technical note? Well, the site is– or, at the very least, appears– dead simple. Some user sticks some data in a web-based form. It is then stuck into a database for long-term storage. Later, another user wants said data so it is retrieved and displayed. Simple. Not only is there nothing wrong with this, I always prefer that everything be made as simple as possible, but no simpler. So why is Facebook so damn popular if it does not give us anything we did not already have?

The answer is that it does. It gives us something that is harder to measure: easy communication for everyone. Not just for the protocol engineers who speak Nerd, not just for the computer programmers (those handsome devils) who make software, but for everyone. Before the rise of Facebook long-distance communication was geared more towards one-on-one interaction. The telephone (later the cellular phone), e-mail, etc. These were all giant steps forward but did not easily address “the crowd.” If you wanted to talk to several people on the phone you would need to make several phone calls. There is also a second issue with most communication methods: they happen in real-time. If I want to talk to someone on the phone they need to stop what they are doing to talk back. Real-time is a great goal for most projects but not always the best solution for all. I do not know about you but my friends get grumpy when they need to, say, stop sleeping because I called them.

So here comes Facebook: A graffiti-tagged wall of whatever. Not only can you communicate with others but you can do it outside of normal business hours and not have the pesky are-they-available dilemmas. It is a mix between instant messaging, Internet forums, and three-way calling all in one. There are no new concepts here but great application of old ones. The why is the community. The why is the emotion.

OK, so I am late in getting my brain wrapped around this. Perhaps it is a serious shortcoming of mine but now that someone got me started I am very interested.

Hiding JavaScript? Maybe….

As anyone around me knows (because I will not shut up about it) I have been working on a new project. Said project relies very heavily on JavaScript and revolves around an unusual use for a web browser that I do not want to advertise just yet. Because of this I have been looking for ways to hide my HTML, CSS, and JavaScript from the client. The short answer I discovered? You can not.

Or can you? Of course if any software is going to run code it will have to have a copy in one form or another. With a scripting language the code is presumably viewable by anyone, right? With JavaScript it is viewable in the View Source option of the user's browser, which makes everyone from a curious hacker to your grandmother your worst enemy (I love grandma unless she steals my stuff). You can perform obfuscation on your code but that really does not fix the problem; anyone with half a brain could decode anything the browser can decode because all the tools they would need are already in front of them. What to do, what to do?

Although it does not solve the problem completely I am considering a new project. A project that might hide virtually everything but still allow the browser to render properly. What if this method was inherently cross-platform and completely transparent to the client? What if this method not only offered a developer a lot more security but also provided an API that made web applications stateful with any unmodified, off-the-shelf web server and a lot more efficient on bandwidth?

I may soon start running experiments to test feasibility but I do not foresee any reason my idea would not work. Perhaps this could even be a marketable product…

Developers of Facebook v2

It is exactly one year later (which is a bizarre coincidence) and I am once again torturing myself. Facebook, I am told, is gradually changing a lot of its API-related stuff. One of the things that jumped out at me first was its slow migration away from FBML. I always like to control my own stuff as much as I can so I never seriously considered it (they allow you to host your own stuff and stick it in an iframe on their site as an alternative). Still, I think this is a good move for them. It allows finer control for serious developers and– very slightly– stops people from making crap since they need some commitment such as their own web servers before even starting.

Anyway, we will see how it goes. If it is anything like last time there will be a lot of kicking and cursing followed by yet another pledge never to use the Facebook APIs again.

Update 2011.04.16
This is still extremely painful. I had told myself over and over again to just stick with it. To just work hard and I would figure it out. Well nuts to that! The tipping point? I just copied and pasted one of their most basic examples and all I can get it to do is sit in an infinite loop that loads the same page over and over. Their documentation looks great until you start to try what they are attempting to explain. They have several concepts that are only explained in hidden links throughout the manuals, which are a bitch to find, and when something goes wrong there is no help.

Fuck. This.

Persistent Worlds and Their Storage

Over the past few months I have been putting together an MMO-style bit of software. Since it is more of an experiment than anything else I did not start with a design plan. That is not to say that most things are not planned beforehand but I have no idea what will work best so I am trying a number of things off the hip first.

Right now I am working on the basis of what will make it multiplayer. The decisions I have to make now: how will the data be stored, and how will the clients access it?

  • I could store everything in an SQL database. This is attractive for its persistence and accessibility across multiple platforms and languages. The downside is I can not control what is cached and what is on disk as much as I would like. Every now and again I may take a huge hit in performance as it was not designed for this task. I may hit a bottleneck much sooner in a high-concurrency situation than I otherwise would.
  • I could use memcached. This is attractive for the obvious reason: blinding speed. The downside is I would have to do so much more work in code since it does not guarantee stored data will exist when I need it. This increased work could move my bottleneck to the CPU, which is already under a pretty heavy load from other tasks. I would not know the full effects of this until the project is mostly complete, leaving me in a chicken-or-egg situation.

I am sure there are many other options. These are the two I am aware of that seem best suited to my task right now.

No matter what I do I will build a very lightweight abstraction layer so I can switch between different designs quickly (a rough sketch of what I have in mind is below). This will save a lot of time later on so I do not have to reinvent the wheel over and over again with each test.
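
Something like this is all I have in mind: a handful of calls the rest of the code talks to, with the SQL and memcached versions hidden behind it (the class and method names here are just placeholders):

#include <string>

// Placeholder interface; each backend (SQL, memcached, whatever else) implements it.
class WorldStore {
public:
	virtual ~WorldStore() {}
	// Fetch a value by key; returns false if the backend does not have it.
	virtual bool get(const std::string &key, std::string &value) = 0;
	// Store (or overwrite) a value; the backend decides how and where it lives.
	virtual bool set(const std::string &key, const std::string &value) = 0;
};

// class SqlWorldStore : public WorldStore { ... };
// class MemcachedWorldStore : public WorldStore { ... };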

Working Around JavaScript Shortcomings

I am working on a real-time, JavaScript only project. I do not want to give too much away right now but I will say this: JavaScript was not designed for what I want it to do. The timers are not accurate enough and relying solely on synchronous or asynchronous communication between components simply will not work at all. What it comes down to is this is what C or C++ was meant for.

I have spent hours, today alone and not to mention yesterday, just reading. Reading about tricks to make your own timers, how Internet Explorer on Windows or Firefox on Linux might react, and how reliably. But I am determined; I am determined to make what I envision work as I envision it with nothing more than what everyone's browsers already have. A friend just suggested I use Flash but Flash has way too many issues with performance and cross-compatibility (say what you want, I am sticking to that). In two words? Fuck. Flash.

I have written about how anal I am in the past. Especially when it comes to things like this. I refuse to be beaten by a scripting language, let alone a scripting language built into a God damned web browser. If I may bring my ego into this– too late– it would also be great to be “the guy” who pulled this off. The guy who people copy. The guy who starts a bunch of copycat projects.

I have learned a lot thus far. I am convinced this is very doable. It is all just going to require some research, cursing, work, and cursing. This is going to be great.

The Old vs The New

Back in the day computer programming was more of an ordeal than it is now.

A computer programmer's job is to take an idea, turn it into a set of instructions, and write code in a programming language that tells the computer what those instructions are. The computer then takes the resulting program and does whatever it was told to do. All of the instructions contained in this program, at their lowest level, boil down to “stick these numbers into these memory locations, do some addition on them, stick the result in this memory location, rinse, repeat.” It often all appears to be much more than that but that is a computer programmer's mission in life.

In most older languages– C comes to mind– a programmer had to first allocate the memory they wanted to use. After they were done using it they had to deallocate the memory. This was especially important because memory was expensive and, as a result, computers of the day did not have much of it. It was common for tricks, or hacks, to be used that were not ideal solutions because there was simply no other way with such a limited resource.
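
For anyone who never had to live it, that dance looks roughly like this in C:

#include <stdlib.h>
#include <string.h>

int main() {
	char *buffer = (char *)malloc(64);	// ask for exactly the memory you need
	if ( buffer == NULL )
		return 1;			// and cope when the machine says no
	strcpy(buffer, "hello");		// use it
	free(buffer);				// then give it back yourself
	return 0;
}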

Fast forward to years ago when I first started with this discipline. The people I had learned from– both in the form of teachers but much more commonly in the form of over-the-Internet searching– were the product of this era. As a result I have an almost religious obsession with managing resources myself. I tend to shy away from libraries that do a lot of the work for me simply because I can not see what they are doing, how they are doing it, and when they are doing it. In addition I do not control their resource usage, which just irks the crap out of me. In an age when desktops with multi-gigabyte amounts of memory can be had for under $300, is this still a logical habit?

Many languages today have a feature called garbage collection. Very popular and common languages like PHP, Java, and the various .NET Framework languages all implement this. Not only do they implement it, they nearly demand you rely on it. This always seems like the job of a programmer to me and not the machine. After all, the program needs to, in addition to whatever code I write, keep track of all memory allocated, variable types, the scope of said variables, etc. for garbage collection to work. It just seems like a lot of work being done using a lot of resources I could easily use somewhere else and a bad case of the tail wagging the dog. It is also worth mentioning that none of the above languages come anywhere near the speed and efficiency of C.

What brings all this up is a project I have had in the back of my mind for a year now. This project requires a lot of JavaScript. JavaScript, for anyone who is unfamiliar, has a surprising amount of functionality missing from it. It can not, for example, trim excess whitespace from a string, nor can it pad a string. These are common programming tasks which you would have to write yourself.

I want to do something more complex than basic string manipulation. I want to create a scrolling, Google Maps-esque interface, among other things. I could write all of the code myself (which I already did for part of another project) or I could just use a library like jQuery. jQuery already has a lot of the smaller bits of what I need. Not only that, but so many people use it that one can safely assume most of the bugs I would be likely to hit have already been ironed out. It really seems like a win-win for both jQuery and myself.

Perhaps Jeff Atwood makes a good point when he says I am dangerous. Perhaps I have already answered my own question just by writing about it. The logical solution is to use the time-tested, rave-reviewed jQuery but there is just something inside me that does not want to give up the control. It just seems… wrong. I should be able to control every aspect of my software as to make it as efficient as possible. I should put in the work and have everything the way I know it should be without any questions about the guts or inner workings of something. On the other hand if I were a mechanic I would not be building my own combustion engine just because I could…

… I should stop being such an anal control freak.