Taking the Swift Plunge

Okay… So… I think I’m finally ready to take the plunge and start learning the Swift Programming Language in earnest. I’ve been watching it, and starting with version 5 there is finally enough there for me to get excited about. But this isn’t going to be easy for me. Not like it was when I was in my late teens (17-19) when I learned Basic and assembly language, my 20s when I learned C/C++, Pascal, Fortran, Perl, and Visual Basic (just to name a few), or even my 30s when I learned Java, Javascript, SQL, and PHP (just to name a few more). I just don’t have the free time that I used to. It took me almost my entire 40s to learn and master Objective-C! So you can imagine my hesitation when I learned that Apple was coming out with a new language to replace Objective-C altogether. 🤬

If I wasn’t so interested in developing Mac, iPhone, iPad, Watch, and TV apps then I wouldn’t bother. But after two decades of being a network and back-end systems programmer (what’s come to be known as a “Full Stack” developer) I’ve really just grown tired of it and want to do something new. On top of that the language I’ve primarily devoted the last 20 years of my life to has grown long in the tooth. Like many Java developers I’ve waited year after year for the platform to really become all that it promised to be. But, instead, it seems to have drifted off into this abyss waiting for anyone to notice that it’s finally dead. I mean, come on! It’s been 25 years or so since Java came out and we are JUST NOW, in version 12, getting a run-time that will release unused heap memory back to the OS!

It’s kind of like Elvis. We were promised “young Elvis” who would simply get better with age and constantly push the envelope of music giving us ever newer masterpieces of auditory joy. Instead we’ve now got “old, fat Elvis” still singing the same tunes that made him famous 25 years earlier gyrating around on the stage as buckets of sweat pour off of his body. Then, to add insult to injury, the women (think middle managers) still swoon for him like they’ve all been hypnotized into still seeing him as his younger version. And, just like with Elvis, one day soon we’ll hear that Java has been found dead. Suffering a fatal SEGFAULT while trying to grunt one out on the toilet. And, except for the few lunatics who swear he’s not really dead, all we’ll be left with is a massive stack dump!

Some people might ask why I don’t go with something like C# and Xamarin so I can develop for both iOS AND Android at the same time. Well the answer is actually simple. I absolutely detest Microsoft (and C# is a Microsoft invention and Xamarin was purchased by Microsoft), Swift is open source (yes you read that right) which C# isn’t, and developing for Android does not interest me in the least. For one thing with Android you get, well, phones and cheap iPad knockoffs and that’s about it. I also want to develop for Apple computers (macOS), Apple TV (tvOS), and Apple watches (you guessed it…watchOS). So with Swift I get to do all four platforms.

Also, as an added bonus, Swift exists on Linux platforms too!!!

So, as I learn Swift I’ll keep posting here as time permits with all the latest on my adventure. 🤪

Always an Easier Way

Previously in Avoiding an Objective-C Pitfall #1 I discussed a more stable way of creating singletons in Objective-C. As with all things in the world of Apple there’s always an easier way, and that way comes to us via two very powerful yet largely unnoticed (in the Windows and Linux communities at least) APIs that Apple has contributed to the development community. The first is a library that Apple developed to make multithreading safer and easier in a way that no other library ever has. It’s called Grand Central Dispatch (a nod to New York City’s famous Grand Central Station), and it reimagines threads in terms of dispatch queues. If you have never heard of it I highly recommend getting to know it. Versions of it also exist for FreeBSD, Linux, and Windows.

The second is an extension to the C language called Blocks, which Apple contributed and which is now a standard part of the clang/llvm C/C++/Objective-C compiler (the default compiler for Apple). If you’ve ever worked with closures in other languages such as Javascript then Blocks will instantly be familiar to you. In fact, from here on out I will refer to these code sections as closures. As with GCD, Blocks are supported on FreeBSD, Linux, and Windows.

So let’s get right into how all of this makes things easier in the realm of singletons. The first thing I recommend doing is creating a simple function to return a shared instance of a serial dispatch queue. Yes, a singleton! 🧐

[Screenshot: the shared serial dispatch queue function]

Here we see things in action right off the bat. We use Grand Central Dispatch’s dispatch_once(…) function to create a single serial dispatch queue. Notice right away that there are no locks or @synchronized(){} blocks anywhere to be found. That’s because the contract for the dispatch_once(…) function says that in the event of multiple threads calling it at once, only one thread will execute the closure. All other threads will be blocked until the currently executing thread completes. And, in this case, the variable _serialQOnce acts as a predicate that will keep the closure from being executed more than once. Initialized to zero, dispatch_once(…) will set that variable to a non-zero value upon completion. The body of the closure performs the actual creation of the singleton.
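
If you’re curious what that contract looks like outside of GCD, here’s a minimal plain-C sketch of the same once-only idea using POSIX’s pthread_once(). This is an analogue of the pattern, not the code from the screenshot, and the names are made up:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* One-time initialization, analogous to the dispatch_once() predicate:
 * g_once guarantees create_shared() runs exactly once no matter how
 * many threads race into get_shared(). */
static pthread_once_t g_once = PTHREAD_ONCE_INIT;
static int *g_shared = NULL;

static void create_shared(void) {
    g_shared = malloc(sizeof *g_shared);  /* the "singleton" */
    *g_shared = 42;
}

int *get_shared(void) {
    pthread_once(&g_once, create_shared); /* other callers block until done */
    return g_shared;
}
```

pthread_once() gives the same guarantee described above: the initializer runs exactly once, and every other caller blocks until it has finished.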

The reason we’re doing this first is because serial dispatch queues are extremely handy and this gives you a single, static queue that isn’t one of the system “global queues” that could become very busy. The one produced by this function will be for your application and your application only.

So let’s move on to using a serial dispatch queue to create an improved singleton factory for a class. Serial dispatch queues have a unique property in that blocks placed on them are executed sequentially with respect to each other but asynchronously with respect to other queues (even other serial queues) and threads. Boiled down, this simply means that each item pulled off the queue (in FIFO order) is allowed to fully complete executing before the next item is pulled off the queue. This is in contrast to a concurrent queue, in which an item is pulled off and started, and then the next item is pulled off and started without waiting for the previous item(s) to complete.

Think of it this way: suppose you have a corn maze that you let children run through. Now imagine a line of children (your queue) waiting to enter the maze. In a concurrent queue you would allow the children to enter the maze one at a time, but you don’t wait for each child to finish the maze before you let the next child into the maze. Perhaps you separate them by only one second. The end result is that, depending on each child’s abilities, such as how fast they can run, the first child into the maze may not be the first one out of the maze. In fact the first child in might even be the very last one out!

In a serial queue, however, you still have your corn maze and your line of children (your queue), but this time as you let a child in you wait for them to complete the maze before you let the next child in. This has the effect of guaranteeing that the first child in is the first child out and, by extension, the last child in is the last child out. But it also has the effect that each child is alone in the maze and able to complete it entirely at their own pace – they’re not going to get run over by another child. This is the effect that we are going to take advantage of.
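
Stripped of the threads, the FIFO contract the maze illustrates boils down to something this simple (an illustrative sketch with made-up names): items come off in the order they went on, and each one runs to completion before the next starts.

```c
#include <assert.h>
#include <string.h>

/* Tiny FIFO of tasks: drain() runs them strictly in submission order,
 * each to completion before the next begins -- the "one child in the
 * maze at a time" behavior of a serial queue. */
#define MAX_TASKS 8
static void (*g_tasks[MAX_TASKS])(void);
static int g_head = 0, g_tail = 0;
static char g_log[MAX_TASKS + 1];
static int g_logn = 0;

static void submit(void (*task)(void)) { g_tasks[g_tail++] = task; }

static void drain(void) {
    while (g_head < g_tail)
        g_tasks[g_head++]();   /* runs fully before the next is pulled */
}

static void task_a(void) { g_log[g_logn++] = 'a'; }
static void task_b(void) { g_log[g_logn++] = 'b'; }
static void task_c(void) { g_log[g_logn++] = 'c'; }

const char *run_demo(void) {
    submit(task_a);
    submit(task_b);
    submit(task_c);
    drain();
    g_log[g_logn] = '\0';
    return g_log;              /* the tasks ran in FIFO order */
}
```

A real serial queue adds the threading on top, but the ordering guarantee is exactly this.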

So let’s look at the code…

[Screenshot: the improved singleton factory method]

It looks almost the same except that the body of code that used to be inside a @synchronized(){} block is now inside a closure that is executed on a serial dispatch queue. The dispatch_sync(…) function (the “sync” has nothing to do with the fact that we’re using a serial queue) will wait until the provided closure has completed executing before it returns. This guarantees that the _instance variable will be populated when the function completes. The “__block” storage modifier on the _instance variable simply allows it to be modified from inside the closure. Also, any concurrent calls to this method are guaranteed to happen sequentially so that two instances of the same class are not created at the same time.
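
For anyone without libdispatch handy, the guarantee we’re leaning on can be sketched in plain C with a mutex standing in for the serial queue plus dispatch_sync(…) combination. Again, this is an illustrative analogue with made-up names, not the code from the screenshot:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Lazy singleton where concurrent callers are serialized by a mutex,
 * mirroring how dispatch_sync() on a serial queue runs one closure at
 * a time and only returns once it has finished. */
static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static void *g_instance = NULL;
int g_create_count = 0;          /* how many times creation actually ran */

void *shared_instance(void) {
    void *result;
    pthread_mutex_lock(&g_lock); /* only one caller inside at a time */
    if (g_instance == NULL) {
        g_instance = malloc(16); /* stand-in for [[Class alloc] init] */
        g_create_count++;
    }
    result = g_instance;
    pthread_mutex_unlock(&g_lock);
    return result;
}
```

The mutex serializes concurrent callers exactly the way the serial queue does: creation runs at most once, and every caller leaves with the same instance.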

So why is this better? Well, that depends on your point of view I suppose.  But for me it eliminates the @synchronized(){} construct that is known to have a rather high time cost.  Also I feel it looks, overall, a little more elegant.

Objective-C and Exceptions

The formatting really didn’t copy over well. Read the actual page here: https://github.com/GalenRhodes/LockSpeedTest/blob/master/README.md


My current use of this is to test if there was any overhead involved with using the clang -fobjc-arc-exceptions flag to ensure that memory isn’t leaked when exceptions are thrown in Objective-C.

Basically I compiled this test without -fobjc-arc-exceptions and ran it, then compiled it again WITH -fobjc-arc-exceptions and ran it again. Each run executes a simple test where a local variable inside a @try {} block is assigned a newly created object, and an exception is thrown on the final iteration. It does this 2,000,000 times and takes an average of the time to complete each loop (total_time / iterations).

#define _ITERATIONS_ ((uint64_t)(2000000))

-(NSUInteger)testARCExceptions {
    NSString *str = nil;
    NSUInteger iter = _ITERATIONS_;
    NSUInteger throwWhen = (iter - 1);

    for(NSUInteger i = 0; i < iter; ++i) {
        @try {
            TestClass *test = nil;

            test = [[TestClass alloc] init];
            str = [test buildString:i];

            if(i == throwWhen) {
                @throw [NSException exceptionWithName:NSGenericException reason:@"No Reason" userInfo:@{ @"Last String":str }];
            }

            [PGTestMessages addObject:@"--------------------------------------------------------------"];
        }
        @catch(NSException *e) {
            [PGTestMessages addObject:[NSString stringWithFormat:@"Exception: %@; Reason: %@; User Info: %@", e.name, e.reason, e.userInfo]];
        }
    }

    return iter;
}


The class TestClass is just a very simple class. The important bit is that it logs a message when it is both created and deallocated.

-(instancetype)init {
	self = [super init];
	if(self) {
		_instanceNumber = [[self class] nextInstanceNumber];
		_instanceName   = [NSString stringWithFormat:@"%@:%@", NSStringFromClass([self class]), [[self class] formattedInstanceNumber:self.instanceNumber]];
		[PGTestMessages addObject:[NSString stringWithFormat:@"Instance %@ created.", self.instanceName]];
	}
	return self;
}

-(void)dealloc {
	[PGTestMessages addObject:[NSString stringWithFormat:@"Instance %@ deallocating.", self.instanceName]];
}


Then when the log is printed after the test you can see each instance of the TestClass being created and deallocated. Even better you can see the effect of the -fobjc-arc-exceptions flag at the end of the output.

Without -fobjc-arc-exceptions

"Instance TestClass:1,999,997 created.",
"Instance TestClass:1,999,997 deallocating.",
"Instance TestClass:1,999,998 created.",
"Instance TestClass:1,999,998 deallocating.",
"Instance TestClass:1,999,999 created.",
"Instance TestClass:1,999,999 deallocating.",
"Instance TestClass:2,000,000 created.",
"Exception: NSGenericException; Reason: No Reason; User Info: {\n    \"Last String\" = \"abnegating abnegation abnegations abnegative abnegator abnegators Abner abnerval abnet abneural\";\n}"

With -fobjc-arc-exceptions

"Instance TestClass:1,999,997 created.",
"Instance TestClass:1,999,997 deallocating.",
"Instance TestClass:1,999,998 created.",
"Instance TestClass:1,999,998 deallocating.",
"Instance TestClass:1,999,999 created.",
"Instance TestClass:1,999,999 deallocating.",
"Instance TestClass:2,000,000 created.",
"Instance TestClass:2,000,000 deallocating.",
"Exception: NSGenericException; Reason: No Reason; User Info: {\n    \"Last String\" = \"abnegating abnegation abnegations abnegative abnegator abnegators Abner abnerval abnet abneural\";\n}"

You can see that without the -fobjc-arc-exceptions flag (the first example) the last instance is never deallocated because the exception is thrown. The next example with the -fobjc-arc-exceptions flag clearly shows that code was added to clean up the objects. The line of bars is not printed between the creation and deallocation because the exception was thrown before execution reached that point.
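
The underlying effect is easy to reproduce in plain C, where longjmp() plays the role of @throw: control jumps right past the cleanup code, and whatever would have been released simply leaks. This is a little illustrative sketch of mine, not code from the project:

```c
#include <assert.h>
#include <setjmp.h>
#include <stdlib.h>

static jmp_buf g_env;
int g_cleanup_ran = 0;

/* Allocates an object, then either finishes normally (running the
 * cleanup) or "throws" via longjmp(), skipping the cleanup entirely --
 * just as a thrown exception skips ARC's releases when the code is
 * built without -fobjc-arc-exceptions. */
static void do_work(int should_throw) {
    void *obj = malloc(16);     /* stand-in for the TestClass instance */
    if (should_throw)
        longjmp(g_env, 1);      /* the "@throw": obj is never freed */
    free(obj);                  /* the "dealloc" */
    g_cleanup_ran = 1;
}

int run_once(int should_throw) {
    g_cleanup_ran = 0;
    if (setjmp(g_env) == 0) {   /* the "@try" */
        do_work(should_throw);
        return 0;
    }
    return 1;                   /* the "@catch" */
}
```

The flag’s job, in effect, is to insert the missing free() on the exceptional path.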

The result? No difference. None. The numbers I’ve generated (running it three times each way and taking the average) show that if there is a difference it’s only about 140 nanoseconds at best. And, I should point out, that’s 140 nanoseconds FASTER than without using -fobjc-arc-exceptions!

Run       Without Flag      With Flag
1         12,720.9940 ns    12,391.1695 ns
2         12,440.9565 ns    12,268.1410 ns
3         12,195.2575 ns    12,267.5080 ns
Average   12,452.4027 ns    12,308.9395 ns

All of this leads me to wonder why Apple would have chosen to NOT make this the default when building their own Frameworks. Anyone with some deep knowledge care to chime in?

It’s All About the Tools Baby!

I originally posted this on LinkedIn back in June, 2017.

My high school shop teachers used to drill it into me and my fellow students every day. At least once every day I’d hear one of my shop teachers tell someone in the class, “Use the right tool for the job!” Usually this happened just as someone was about to use a screwdriver as a chisel, use some part of anything as a hammer, or attempt to “eye ball” a line on a mechanical drawing without using a T-Square. To this day, anytime I’m about to use a butter knife as a screwdriver I hear one of their voices in the back of my head.

As a software developer I use tools everyday. And I’m always on the lookout for new tools to help make my job writing programs easier, better, faster, and with fewer bugs. Back when I first started writing code there were very few tools available. Writing code involved running a line editor (what’s WYSIWYG?) to edit your source code, then exiting out of the editor (did I remember to save first?) and running the compiler (time to get coffee), then if it compiled successfully (doh!) running the linker, then finally, if the linking process succeeded, running the actual program to test it. If there was a bug (DOH!) then you’d have to go back and start the whole process over again.

All of this started to change in the mid- to late-1980’s when Integrated Development Environments (or IDEs) started emerging. Programs such as the Merlin Assembler and Turbo Pascal allowed you to edit source files, compile, link, and debug all from the same application. These days we have fabulous IDEs such as the open-source Eclipse and commercial applications like IntelliJ, CLion, AppCode, and Xcode. These IDEs come packed with incredibly useful features like source code syntax highlighting, automatic formatting and indenting, powerful editing, project-wide visual search and replace functions, and fantastic refactoring tools such as being able to rename an identifier (variable, function, etc.) in one spot and having it instantly change everywhere it’s used in any file in the project. And thanks to the profiling and static code analysis tools available for Java and C/C++ these IDEs will even show you your mistakes WHILE you’re making them! Forget a semicolon and they will highlight it in the editor for you right away. I was blown away the first time an IDE highlighted a section of unreachable code even before I noticed it (something I’m usually pretty good at).

In my professional life, however, I’m blown away by a couple of things. First, the number of other developers who are completely unaware of the number of tools available to them (I’ve even met a few who still use Notepad to edit HTML and XML files) and, second, the number of companies who are averse to letting their developers use those tools even if they do know about them. Even the free ones!

You see, one of the other valuable life lessons the shop teachers in high school taught me was to learn all about the tools that we had available to us – even if we thought we’d never use them. One of my teachers would even bring in catalogs and point out interesting tools that you could get to help you do common jobs faster. That was where I first learned that you could buy “jigs” to help you with everything from building window frames to helping you know exactly where to drill the holes for a door handle so that everything lined up properly.

My teachers felt that while it was important for me to learn to do things the “old fashioned way” or the “long way” (like learning proofs in mathematics), in order to be successful and productive in the real world you needed to learn to use tools that would allow you to be faster, safer, and more accurate. Or better yet, as my metal shop teacher did at least once, dream up and build your own tools. After all, if you’re building houses for a living, why not use a jig to put in the door handles and save yourself hours of time? I doubt you’d see a carpenter today building a house with a hammer and a hand saw when they could go out and buy themselves a nail gun and a power saw. My shop teachers were some of the first people in my life to emphasize to me that time really is money.

That’s why it baffles me when I see another programmer who doesn’t seem to know that you can simply hit control+shift+F in Eclipse to reformat an entire source code file (or that well-formatted source code is important to begin with) or a company that refuses to even consider their developers’ requests for certain software tools. There have been so many times in my career that I would have loved to use WinMerge to compare a set of files for differences but some of the companies I’ve worked for won’t allow it even though it’s free.

The issue of a developer being unaware of the available tools is a topic for another article I’m working on regarding people who are passionate about their job versus those that are simply “paying the rent.” Look for that article soon.

But the reluctance of companies to invest in or allow the tools is pretty easy to understand if you look at it from the company’s point of view. In this day of data breaches, ransomware attacks, and multimillion dollar fines for unlicensed software, companies are very wary of allowing anything to be installed on corporate computers that hasn’t been completely vetted for safety. And I can’t say I blame them. But some companies seem to have taken this stance almost to the extreme, even to the detriment of their developers who would benefit from having the right tools. Some companies do have some sort of “suggestion box” approach allowing employees to request certain software so that it can at least be considered. Many companies, however, still have a standard answer of “NO,” preferring their developers to simply “make do.”

There’s also the issue of encouragement. Companies, mostly those that aren’t software companies in the first place but are large enough to have their own IT departments to support them (like retailers, manufacturers, banks and other financial companies), don’t always encourage their employees to enhance their skills, or if they do it’s only half-hearted at best. Some will offer free online courses through sites like Lynda.com, but employees must take them on their own time and only a few might offer tuition reimbursement. And at the end of the day there’s virtually no impact to the employees whether they do or don’t enhance their skills. Good grief, just trying to get a company to change development processes from waterfall to agile is a headache. You might as well be asking everyone in the company to spontaneously change religions!

The downside to all of this, which the management of companies should consider, is best put in terms of a hospital. If they were in need of emergency, life-and-death medical care, would they want to go to a hospital that was stuck in the 1990’s because it didn’t want to spend money or take the time to consider new equipment and training, and instead preferred its doctors and nurses simply “make do”? Or would they prefer to go to a hospital that had modern, up-to-date equipment and a staff that was encouraged to be knowledgeable in using those tools and looking for new ones?

But, at the end of the day, it’s still about time and money. Management should always be looking for ways to improve efficiency and the right tools, even if you have to spend a little money, will make the right employee faster, better, more accurate, and happier. And that always equals a healthier bottom line.

Avoiding an Objective-C Pitfall #1

Below is a very commonly used design pattern in Objective-C.

[Screenshot: the typical Objective-C singleton “instance” method]


It’s the typical Objective-C “Singleton Pattern” because it does just that: it returns a singleton of the class it belongs to. If we take a look at an example that uses it we can see it in action.

[Screenshot: an example program calling [Test1 instance]]

The “@synchronized(self)” statement ensures that only one thread at a time enters that inner block and keeps more than one copy of the class from being created. Without it, two or more threads might just happen to get past the “if(_inst == nil)” statement before one of the threads manages to complete the assignment to the static local variable “_inst”.

If you run this program you get the output you’d expect:

[Screenshot: program output]

The class “Test1” created exactly ONE instance of itself, stored it in the static local variable “_inst” and then returned it. If the program had called the class method a second time it would have gotten the exact same instance that was created in the first call like so:

[Screenshots: calling the “instance” method twice and the resulting output]

All well and good! Now, let’s add a little wrinkle to the situation by creating a subclass of “Test1”:

[Screenshot: the “Test2” subclass of “Test1”]

In this example, “Test2” inherits directly from “Test1”. It’s very simple and doesn’t even override any methods. We’ve also set the second instance object in the main function, “o2”, to get the result of calling [Test2 instance] instead of [Test1 instance]. But since we didn’t override anything it’s still really calling the same method found in “Test1”. Now, let’s run it and see what output we get.

[Screenshot: program output]

Now, for some of you, this is not what you were expecting to see. You were probably expecting to see “Returned Class 2 = Test2“. However, the more I thought about it, the more I realized this was exactly what I should have expected to see. The reason for this is simple. Static local variables are scoped to the method in which they are created. Since this was a class method (it belongs to the class definition rather than to instances of the class) it belongs to the class in which the method was defined. Bottom line: this would have been EXACTLY as if I had coded it as follows:
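
This pitfall isn’t really an Objective-C thing at all – it falls straight out of how C treats static local variables. Here’s a tiny plain-C illustration (the names are mine): one function, one piece of static storage, no matter who calls it – just as [Test1 instance] and [Test2 instance] share the same _inst.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A static local is a single piece of storage shared by every caller
 * of the function, exactly like the _inst variable shared by Test1 and
 * its subclass Test2: whoever calls first decides what everyone else
 * gets back. */
const char *instance_for(const char *class_name) {
    static const char *_inst = NULL;
    if (_inst == NULL)
        _inst = class_name;   /* first caller wins */
    return _inst;
}
```

Call it first with “Test1” and every later caller gets “Test1” back no matter what they pass – exactly the behavior we just saw.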

[Screenshot: the equivalent code with the variable as a global]

Notice that the static local variable is now your everyday, garden-variety global variable. The only difference now is that from a scope perspective it can be “seen” from the entire file instead of just inside the [Test1 instance] method.

Incidentally, if you reverse the two calls to the “instance” method you’ll get instances of “Test2” instead of “Test1”.

[Screenshots: the reversed calls and the resulting output]

Why didn’t an instance of “Test1” get created? Because we called [Test2 instance] instead of [Test1 instance]. But why, since we didn’t override that method in “Test2”? For a better explanation of meta-classes in Objective-C I refer you to this excellent blog post: What is a meta-class in Objective-C?

But there’s also another, more problematic issue with this test case. That “@synchronized(self)” statement? It’s rendered completely useless in our example if [Test1 instance] is called from one thread and [Test2 instance] is called at the same time from another thread. The reason is evident in the second line of the output. Even though the method belongs to the class definition for “Test1” we’re calling it from the scope of the subclass definition “Test2”. (Again, read the blog post at the link given above.) So if there are two threads and one thread calls [Test1 instance] and the other thread calls [Test2 instance], that synchronization lock will be locking on two entirely different objects because the “self” properties in the two calls will be pointing to two different objects. Bottom line: you could end up with an instance of a different class than the one you were expecting.

Below is one solution, albeit not the optimal one. We simply override the class method in “Test2.”

[Screenshots: overriding the “instance” method in “Test2” and the resulting output]

Why isn’t this the optimal solution? Because you have blatant code duplication. Not just within the program as a whole but, even worse given that we’re talking about Object Oriented Programming, within a subclass!

No, a better solution is needed. And the one that I’ve come up with in my own code is only slightly more complicated but, I think, solves the problem very nicely without adding much memory or CPU time overhead. Let’s look at the newly refactored “instance” method.

[Screenshot: the refactored “instance” method]

We will now store the singleton instances in a dictionary using the class name as the key. This ensures that if we call this method using [Test2 instance] we get an actual instance of “Test2” and if we call it using [Test1 instance] we get an actual instance of “Test1”. Moreover, we still only create just one instance of any given class.
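
The same idea can be sketched in plain C as a little table keyed by name: one instance per key, created on first request. This is just an illustrative analogue with made-up names, and with the synchronization left out for brevity – the real Objective-C version keeps its lock:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* One singleton per key: look the key up first and only create a new
 * "instance" if the key has never been seen before, mirroring the
 * dictionary keyed by class name described above. */
#define MAX_CLASSES 16

static struct { const char *key; void *inst; } g_table[MAX_CLASSES];
static int g_count = 0;

void *instance_for_class(const char *key) {
    for (int i = 0; i < g_count; i++)
        if (strcmp(g_table[i].key, key) == 0)
            return g_table[i].inst;      /* already created for this key */
    if (g_count == MAX_CLASSES)
        return NULL;                     /* table full */
    void *inst = malloc(16);             /* stand-in for creating the object */
    g_table[g_count].key  = key;
    g_table[g_count].inst = inst;
    g_count++;
    return inst;
}
```

Each key gets its own instance, and repeated requests for the same key always return the same one.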

Finally we change the object being synchronized on to simply the static base class, [Test1 class], rather than allowing it to be based on the “self” property.

So, after a quick change to the “main” function, we can test the results.

[Screenshots: the updated “main” function and the resulting output]

And you can now see that the results are more in line with what you would expect, without having to duplicate code in the subclasses.


Thoughts on GNUstep

For those that don’t know, GNUstep is a project that started many years ago to help bring Objective-C to the masses on operating systems other than Mac OS X (now macOS). Actually, Objective-C already existed on any platform that had access to the GCC compiler suite, but what GNUstep sought to do was bring over the primary frameworks (libraries) being used on macOS to make porting Mac applications to other platforms a lot easier. In particular we’re talking about the Foundation and AppKit libraries.

For the most part they’ve nailed it on the head for simple applications, and the Foundation library seems rock solid and fairly up-to-date with the Apple versions. In fact the Foundation library seems to be up-to-date with at least macOS 10.9 and some of 10.10.

That said, there are a couple of places where people, including me, are having serious problems with GNUstep. I have to add at this point that these are constructive criticisms! The GNUstep folks had a definite goal in mind when they started and I have to say that they hit the nail on the head with it. Also, like many open source projects, it’s an all-volunteer army where people have day jobs and others simply lose interest or move on to other projects. In fact, the introduction of Swift has definitely contributed to that last point – many people have given up on Objective-C and jumped on the Swift bandwagon.

1. They Keep Breaking It

This mainly applies to the AppKit libraries when trying to use Objective-C 2.0 features such as the non-fragile ABI and ARC. If you try to build your Objective-C 2.0 projects with the non-fragile ABI then you’re not going to be able to use the AppKit libraries – your applications will crash with a notice about mismatched ABIs. The solution is to build without non-fragile ABI support. This is not always desirable.

2. Strict Adherence to macOS Paradigms

Apple’s operating systems (macOS, iOS, watchOS, tvOS, etc.) all have this paradigm where basically everything is a directory. Especially Frameworks and GUI Applications. GNUstep tries its best to replicate this paradigm on other platforms, but the truth is that it just isn’t necessary and adds a huge amount of complexity to the build process and the resulting product. The paradigm works very well indeed on Apple’s operating systems because there is support for it built right into the operating systems. But on other operating systems such as Windows and Linux this paradigm doesn’t work so well. In my opinion, and I’m not alone, it would be best to simply follow the best practices of those other platforms rather than trying to hammer a square peg into a round hole. This applies to packaging too. The default package manager for each platform should be used. It’s less confusing for users of those platforms.

2.1. This Applies to GUI Applications Too

The GNUstep GUI applications ALL look like they were written about 30 years ago.  They use concepts and HID guidelines that harken back to the NeXT operating system, on which everything is modeled.  Even Apple has dropped most if not all of these old HID paradigms in favor of ones that are more modern and practical.  I mean, honestly, who has main menus that just hover somewhere detached from the main window of the application?  Really?

Yeah, okay, I know – GNUstep has this neat pluggable theme system and can be customized, but 1) this is highly problematic (see point #1 above) and 2) it is not well documented – if at all. And if it is documented you can’t find it unless you spend years scouring the web and happen to stumble across it like I did when I found this some FOUR years after it was first posted. (Scroll all the way to the end of the post to “Step 15: Making it Look like a Linux Application”.)

3. Makefiles

Everything should be moved to CMake. ’nuff said.

4. Distribution

There is no support in GNUstep for packaging your applications for distribution to systems that don’t already have the GNUstep libraries on them. I’ve seen ONE very outdated partial document describing one way to do it, but that’s it.

How to Fix This

In an upcoming post that I’ll be sure to link to this one I’ll outline some steps for addressing these issues. At least a couple of them I have already started work on: https://github.com/GalenRhodes/Rubicon


Identifying the Odd Electronic Component

If you ever come across an electronic component that looks like this (a small electrolytic capacitor or radial inductor) with simply the letters “HDX” and rattles if you shake it (even a little bit) then I’ll save you some trouble. ¬†It’s called a “vibration switch”! ¬†And, as you might guess, it’s a SPST switch that closes it’s contacts when it senses vibration. ¬†These are not as sensitive as a G-Force sensor but they can do the trick in telling something when it’s being picked up or moved around.