A Snappier MindNode Part 1 - Performance Tuning
At first blush, the last couple of MindNode releases may have seemed minor, focusing on bug fixes and smaller improvements. Take a closer look, though, and they have been huge. A lot has changed under the hood, and depending on the size of your documents and the tasks you usually perform, you might have noticed that MindNode now feels snappier, more responsive, and overall much faster than previous versions of MindNode for iOS and macOS.
In this blog post, we will take you through some of the changes to MindNode's code base and architecture that allowed us to make big performance improvements when opening documents, editing text, dragging nodes, and resizing nodes.
Common Performance Problems
Software today is very complex. Many factors influence both the measurable and the perceived performance of the software we all use and love. In fact, performance tuning is a common task for many software engineers, and it can be as frustrating as it is satisfying: tracking down a performance issue can sometimes feel like searching for a needle in a haystack. While the issues themselves may be complex and very diverse, the root cause of many performance problems falls into one of a few buckets. Let's take a look at the most common problems and some ways to solve them.
Batch Processing & Deferring
Being lazy might not be a trait you'd attribute to fast and snappy software. Funnily enough, it can be one of the most effective solutions to performance problems.
Whenever we compute a result that isn't actually needed yet, we do unnecessary work that slows down the device. So being lazy is a good thing: the goal should be to only compute what is actually needed. While this may sound trivial, it's far from it in big and complex software with tens of thousands of lines of code. At that size, it becomes hard to reason about the code, especially if functions have side effects.
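Swift has language-level support for this kind of laziness. The following is a minimal sketch of deferring an expensive computation with a lazy property; the names LayoutEngine, canvasWidth, and layoutPasses are illustrative, not MindNode's real API.

```swift
final class LayoutEngine {
    let nodeCount: Int
    private(set) var layoutPasses = 0   // tracks how often the expensive work ran

    init(nodeCount: Int) {
        self.nodeCount = nodeCount
    }

    // `lazy` defers the computation until the value is first accessed,
    // then reuses the stored result on every later access.
    lazy var canvasWidth: Int = {
        self.layoutPasses += 1          // stand-in for an expensive layout pass
        return self.nodeCount * 120
    }()
}

let engine = LayoutEngine(nodeCount: 10)
print(engine.layoutPasses)   // 0 - no work has been done yet
print(engine.canvasWidth)    // 1200 - computed now, on first access
print(engine.layoutPasses)   // 1 - and not recomputed on later accesses
```

If canvasWidth is never read, the layout pass never runs at all, which is exactly the "only compute what is actually needed" goal described above.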
One common example of potentially unexpected side effects can be seen when using key-value observing. Key-value observing, or KVO, provides a mechanism that allows objects to be notified of changes to specific properties of other objects. In simplified terms, object-oriented programming tries to mimic real-world objects. These objects can have certain properties and defined behavior, just like real objects: a car object can have properties like color, fuel level, and maximum speed, and behaviors like accelerate() and brake(). Whenever our car's fuel level drops below a certain threshold, we'd want to know about it. KVO is a technology that allows us to observe the fuel level of our car and get notified when it changes.
In MindNode we use KVO a lot. Whenever you change the color of a node in the inspector, for example, the canvas gets notified of this change and knows that it needs to redraw the node with the new color - exactly what we want. But what if we change the color of many nodes at the same time, for example when applying a new theme? The way KVO works, we get notified about the color change of every node in our document - potentially hundreds or thousands. If we redraw the canvas on every change, that's a lot of redundant work. It would be much more efficient to first change the color of all nodes and only then redraw the canvas, once.
This is exactly the approach we took in many places lately. By reducing KVO, providing batch APIs, and deferring work to a later point, we were able to speed up many operations, like dragging nodes around on the canvas.
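The pattern can be sketched in a few lines: instead of doing the expensive work on every change notification, each notification only records that work is pending, and the work itself runs once at the end of the batch. This is a generic illustration with made-up names (Canvas, setNeedsRedraw), not MindNode's actual canvas code.

```swift
final class Canvas {
    private var needsRedraw = false
    private(set) var redrawCount = 0

    // Called for every individual node change - cheap, just flips a flag.
    func setNeedsRedraw() {
        needsRedraw = true
    }

    // Called once at the end of a batch, e.g. after applying a theme.
    func redrawIfNeeded() {
        guard needsRedraw else { return }
        needsRedraw = false
        redrawCount += 1    // stand-in for the actual, expensive redraw
    }
}

let canvas = Canvas()
for _ in 0..<1_000 {          // a theme change touching 1,000 nodes
    canvas.setNeedsRedraw()   // each change only records pending work
}
canvas.redrawIfNeeded()       // the expensive redraw happens exactly once
```

This is the same idea behind UIKit's and AppKit's setNeedsDisplay/display cycle: coalesce many invalidations, pay for the redraw once.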
Caching

There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors.
– Phil Karlton, Leon Bambrick
A cache stores (usually expensive-to-compute) data so that future requests can be served faster. In essence, using caches is another way of being lazy: instead of computing a result every time we need it, we compute it once, cache it, and instantly return it on subsequent requests. Caches can dramatically improve the performance of many tasks, but they come with their own problems. Every cached result increases memory consumption, and it can become outdated - so you need to make sure to correctly invalidate your cache, or you'll serve wrong results.
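A minimal memoizing cache with explicit invalidation might look like the following - a generic sketch of the idea, not MindNode's actual implementation.

```swift
final class Cache<Key: Hashable, Value> {
    private var storage: [Key: Value] = [:]
    private let compute: (Key) -> Value
    private(set) var misses = 0     // counts how often real work had to be done

    init(compute: @escaping (Key) -> Value) {
        self.compute = compute
    }

    func value(for key: Key) -> Value {
        if let cached = storage[key] {
            return cached           // served instantly from the cache
        }
        misses += 1
        let value = compute(key)    // the expensive computation
        storage[key] = value
        return value
    }

    // Without invalidation, stale results would be served after the
    // underlying data changes - and memory usage would only ever grow.
    func invalidate() {
        storage.removeAll()
    }
}

let squares = Cache<Int, Int> { $0 * $0 }
_ = squares.value(for: 12)   // miss: computed and stored
_ = squares.value(for: 12)   // hit: returned from the cache
```

The two hard parts the quote jokes about are both visible here: choosing a good Key, and knowing when invalidate() must be called.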
Ashton is our open source solution for converting between rich text and HTML, and it's used heavily when saving and opening documents. It converts between an NSAttributedString - the object used to represent rich text on iOS and macOS - and an HTML representation of the same text. This conversion is rather costly, so we looked into ways to speed it up.
A node's title, converted by Ashton, can look like this:
<p style='color: rgba(109, 109, 109, 1.000000); font: 20px "Helvetica"; text-align: left; -cocoa-font-postscriptname: "Helvetica";'>What will make this week a success?</p>
While parsing the HTML description is slow and expensive, in a typical MindNode document many nodes look the same. The text itself, which is unique to this node, is "What will make this week a success?". Everything else describes the formatting of the node - which is exactly the same for many other nodes in the document. Once we realized this, we were able to speed things up with a cache by using the whole formatting string style='color: rgba(109, 109, 109, 1.000000); font: 20px "Helvetica"; text-align: left; -cocoa-font-postscriptname: "Helvetica";' as the key for our text formatting cache.
If every node in a document is formatted differently, this cache is useless and actually slows down document loading. For typical documents, however, our tests showed a 25-30% speed increase when opening a document.
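The core of the idea can be sketched as follows. The parseStyle function here is a hypothetical stand-in for Ashton's expensive style parsing; the point is only that its result is cached using the raw style string as the key, so all but the first occurrence of each style become cache hits.

```swift
import Foundation

var styleCache: [String: [String: String]] = [:]
var parseCount = 0

// Stand-in for the expensive parsing of a CSS-like style declaration
// into individual attributes.
func parseStyle(_ style: String) -> [String: String] {
    parseCount += 1
    var attributes: [String: String] = [:]
    for declaration in style.split(separator: ";") {
        let parts = declaration.split(separator: ":", maxSplits: 1)
        guard parts.count == 2 else { continue }
        let key = parts[0].trimmingCharacters(in: .whitespaces)
        attributes[key] = parts[1].trimmingCharacters(in: .whitespaces)
    }
    return attributes
}

// Parses each distinct style string only once; identical formatting
// across nodes is served from the cache.
func cachedStyle(_ style: String) -> [String: String] {
    if let hit = styleCache[style] {
        return hit
    }
    let parsed = parseStyle(style)
    styleCache[style] = parsed
    return parsed
}
```

For a document where hundreds of nodes share one theme style, parseStyle runs a handful of times instead of once per node - which is where the measured speedup on typical documents comes from.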
If You Can't Measure It, You Can't Improve It.
These were just a couple of the changes we made lately. By focusing on performance and bug fixes, we were able to make MindNode's code base ready for MindNode 6 and the upcoming challenges. Rest assured that along the way, we have also been working on some amazing new features that we plan to release this year.