GJS: What’s next?

In my last post, I went into detail about all the new stuff that GJS brought to GNOME 3.24. Now, it’s time to talk about the near future: what GJS will bring to GNOME 3.26.

Javascript engine

The highest priority is to keep upgrading the Javascript engine. At the time of writing, I’ve got SpiderMonkey 45 almost, but not quite, working, and Mozilla is on the verge of releasing the standalone version of SpiderMonkey 52. If we can get there, then we’ll finally be on a supported release which means we can have a closer collaboration with the Mozilla team. (During the past six months, they’ve been patient with me asking questions about old, unsupported releases, but it’s not fair to ask them to continue doing that.)

I plan to upgrade to 45 but not merge it, and then immediately continue upgrading to 52 on the same branch, then merge it all in at the same time. That way, we won’t have an interregnum where everyone has to build SpiderMonkey 45 in JHBuild and Continuous and the Flatpak SDK. Subscribe to bug 781429 and its offshoots if you want to follow along.

The main language features that this will bring in are ES6 classes (45) and async functions with await (52). At that point, the only major ES6 feature that we will still be missing is modules.
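
To give a flavour of what those two features look like, here is a short sketch in plain JavaScript; nothing in it is GJS-specific API, and fetchConfig() is a made-up function that returns a Promise:

// ES6 classes (SpiderMonkey 45):
class Point {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }

    distanceTo(other) {
        return Math.hypot(other.x - this.x, other.y - this.y);
    }
}

// async/await (SpiderMonkey 52); fetchConfig() is a hypothetical
// function that returns a Promise:
async function startUp() {
    let config = await fetchConfig();
    print(`Loaded ${Object.keys(config).length} settings`);
}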

ES6 Classes

After that is done, I will refactor GJS’s class system (Lang.Class and GObject.Class). I believe this needs to be done before GNOME 3.26. That’s because in SpiderMonkey 45, we gain ES6 classes, and I want to avoid the situation where we have two competing, and possibly incompatible, ways to write classes.
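
To make the risk concrete, here is a rough sketch of the two idioms side by side; Animal and its speak() method are invented examples, and how Lang.Class (and especially GObject subclassing) should map onto ES6 class syntax is exactly the design question to settle:

const Lang = imports.lang;

// Today's idiom, with Lang.Class:
const Animal = new Lang.Class({
    Name: 'Animal',

    _init: function (name) {
        this.name = name;
    },

    speak: function () {
        print(this.name + ' makes a sound');
    },
});

// Roughly the same thing as an ES6 class (renamed only so that both
// versions can coexist in this sketch):
class Animal6 {
    constructor(name) {
        this.name = name;
    }

    speak() {
        print(`${this.name} makes a sound`);
    }
}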

ES6 Modules

Full ES6 module support is still missing in SpiderMonkey 52, but at least some parts of it are implemented. I’ll need to investigate whether it’s possible to enable them in GJS already. However, we will definitely not enable them if there’s no way to keep the existing modules working; we don’t want to break everyone’s code.
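
For comparison, here is what the two systems look like; Gtk comes from introspection, myLib stands in for a hypothetical local module on the importer’s search path, and the commented-out lines show only the standard ES6 syntax, not anything GJS can do today:

// Today: everything is resolved through GJS's own imports object.
const Gtk = imports.gi.Gtk;
const MyLib = imports.myLib;    // hypothetical local module, myLib.js

// Standard ES6 module syntax, shown for comparison only:
// import * as MyLib from './myLib.js';
// import { someFunction } from './myLib.js';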

Developer tools

Next comes a debugger. There are not one, not two, but three existing implementations of a GJS debugger sitting unattended in Bugzilla or a Git branch. None of them will apply to the codebase as is, so my task will be to fix them up, evaluate the merits of each one, and hopefully come up with one patchset to rule them all.

Christian Hergert is planning to add a profiler, so that you can profile your Javascript code with Sysprof, inside Builder.

Documentation

I would very much like to get the GJS documentation browser back online. I hosted it on EC2, but I have run out of free hosting. If you have a server where it can be parked, let me know! (It’s a web app, not static pages, so I can’t just put it on GitHub Pages.) If you want to run the web app locally yourself, you can find instructions here for how I set it up on EC2, on a RHEL 7 box.

Misc.

All that is probably more than I’ll have time for, but here are some of the things that I’d like to get done after that:

  • Update the tutorials on developer.gnome.org to use more modern GJS
  • Better integration with Builder
  • Use structured logging to clean up the “debug topics” mechanism
  • Reduce the list of unreviewed patches in Bugzilla down to 0
  • Find ways to bring in some of the conveniences that Node developers are used to

Chun-wei Fan is working on converting some of the codebase to use C++ smart pointers so that we get the memory leak safety advantages of g_autoptr without losing portability to MSVC.

Build system

The question is inevitable: are we going to switch the build system to Meson? I’m looking forward to it, but no, not until Meson is more mature and some of the open questions about distribution and autobuilding have been answered.

Help!

I think it’s great that once I started contributing, other people soon started contributing too. The 1.48.0 release had way more patches and contributors than 1.46.0, even if you don’t count all the stale patches that I souped up. GNOME’s #javascript IRC channel is starting to be a lively place, compared to how deserted it was last year.

What I’d most like to encourage is for more people to contribute major features so that the above list doesn’t read like a to-do list that’s mostly for me. I’m happy to provide guidance. I think it would be great for GJS to become a more competitive development language for apps using the GNOME technology stack[1], and we won’t get there with just me.

Another way you can help is by using the development version of GJS while developing your apps or GNOME Shell, thereby helping to try out the new features. We had some serious bugs up to, or even past, the last minute in GNOME 3.24, and this seems like the best way to prevent that.

Finally, you can help by sharing your experiences with GJS: good and bad. Talk on the mailing list or IRC, or file a bug on bugzilla.gnome.org if there’s something wrong.


[1] In that regard I’d love to prove wrong Michael Catanzaro’s opinion about using GJS: “there’s no way to change the reality that JavaScript is a terrible language. It has close to zero redeeming features, and many confusing ones.” There is a way! In my opinion ES6 and ES7 have gone a long way towards filling in those potholes. To name just a few, arrow functions mean you can almost always stop caring about the pitfall of what this refers to, and the prospect of doing asynchronous I/O with Promises instead of callbacks actually makes me want to use JS. Of course, in-browser JS is still a terrible language because it has to support the lowest common denominator of Javascripts so that people who haven’t upgraded their browser since Internet Explorer 8 can still visit your website, and that’s why modern web developers preprocess and transpile it to high heaven. But we don’t have to care about all those browser users!
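
As a tiny illustration of the arrow-function point, compare these two ways of connecting a signal handler in GJS; button and _onClicked() are placeholders, and this snippet is assumed to live inside a method of some object:

const Lang = imports.lang;

// Pre-ES6: `this` inside a plain function callback is not your object,
// so you have to bind it explicitly:
button.connect('clicked', Lang.bind(this, function () {
    this._onClicked();
}));

// With an ES6 arrow function, `this` is simply the enclosing `this`:
button.connect('clicked', () => this._onClicked());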

Healthy Code

I work on a team of software developers that maintains several large codebases — too much code for any one person to easily know what’s going on in every part of it at any particular time. I found myself thinking a lot about how to keep the code healthy and a while ago I set my thoughts down as a list of good practices. Thanks to my coworkers at Endless for input, editing, and debate.

The good practices in this post differ slightly from the ones we adopted at work, which reflect the opinions of the whole team; these are worded to reflect my personal opinions.

Assumptions

I don’t like rules without a rationale. I believe these six assumptions underlie the rules that I set out below. That is, if you don’t agree with these assumptions then you probably won’t agree with the rules… ☺︎

  1. We can never know that our own code is correct.
  2. Left unchecked, we will believe our own code to be correct.
  3. Even small mistakes can lead to catastrophic data loss.
  4. Non-trivial programs have interconnections too complex to keep entirely in one person’s mind.
  5. Modifying non-trivial programs will break code unrelated to the modifications.
  6. The business value of maintainable code is only visible to developers.

Good practices for code health

Use your judgement

As always, rules apply only in the absence of any overriding reason to ignore them. Breaking them should be by mutual agreement between the writer of the code and the reviewer. (This system only works if everyone agrees about what the rules are in the first place, though.)

Example reason to break this rule: If no agreement can be reached, then the default is to follow the coding standards.

Review code

Code gets reviewed by a developer who didn’t write any part of it, because of assumptions #1, #2, and #3 — and to spread familiarity with different parts of the codebase throughout the team. Develop with ease of reading in mind, as if you are writing a letter to an unfamiliar code reviewer. Review code skeptically and with full attention, as if it came from a malicious agent out to erase your hard drive.

Example reason to break this rule: You are committing a trivial fix for a broken build and your continuous integration system acts as the code reviewer.

Observe the style

Code follows the coding style. Coding style is important because when code looks the same it’s quicker to read and errors jump out more easily. Apply automated tools when possible to save the code reviewer from becoming a parenthesis counter.

Example reason to break this rule: The code reviewer agrees with you that deviating from the style is more readable.

Test your code

Code needs automated tests. The rationale for this is assumptions #2 and #5, but could be the subject of an entire blog post itself. Lack of tests can be by itself a reason to fail code review, or at least start a dialogue between developer and reviewer about why tests are not necessary in this particular case.

Example reasons to break this rule: A one-off script. A component that proxies an external resource which can’t easily be mocked out.
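
As a concrete (if trivial) illustration of the kind of automated test meant here, this is a sketch in Jasmine-style JavaScript; the module under test, MyLib, and its add() function are made up:

// A hypothetical spec file; MyLib and its add() function are placeholders.
const MyLib = imports.myLib;

describe('MyLib.add', function () {
    it('adds two numbers', function () {
        expect(MyLib.add(2, 2)).toBe(4);
    });

    it('treats a missing second argument as zero', function () {
        expect(MyLib.add(2)).toBe(2);
    });
});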

Refactor on write

You will always have to deal with legacy code (code on which development has ceased but still must be maintained) and rushed code (code which you were forced by circumstances to check in that didn’t quite work well, works but is difficult to maintain, or is not tested.) By assumption #6, you will probably never set aside time to refactor code for its own sake. Therefore, refactor bit by bit to leave the code in a slightly better state each time it’s touched. In this way, code receives refactoring attention roughly proportional to the benefit you receive from refactoring it. If at all possible, add new code with a unit test even if the rest of the code is not written in a testable way.

Example reasons to break this rule: The code is already in good shape. The feature is critical and cannot be delayed. You are contributing your code to an open source project, in which case it is better to work with the upstream community to refactor.

Refactor only on write

Make your diffs per commit no larger than they have to be, in order to make code review easier. Since diffs go line by line, do not fix style errors in lines that are not already being touched in the same commit. Use separate commits if there is an opportunity to make other style fixes.

Example reason to break this rule: If it makes more sense to fix lines other than the ones being edited in one shot (e.g. large sections with wrong indentation), do so throughout the whole file in a separate commit.

Pay down technical debt

Sometimes it’s not possible to build a feature without doing a large refactor first. Determine this as early as possible and include it in the time estimate for the feature. Do not shy away from paying down this debt; it will only compound if you borrow more on top of it. However, keep the changes incremental, and the functionality unimpeded while making these changes.

Example reason to break this rule: Extreme time constraints force you to take out a second mortgage on the code (even then, do this only with a healthy dose of disgust.)

Moving text messages between Android phones

I recently got a new Android phone secondhand, and after resetting it I wanted to move the text message archive over from my old phone. It turns out that you can do this easily if you have root access. Well, technically you can do anything easily if you have root access, but the trick is knowing how. I hope that by putting this out on the internet, other people will be able to know how too.

I had root access on both phones, as they were flashed with CyanogenMod. The new phone is a Nexus 4, and the old phone is an HTC G1 (Android 2.2 is the highest that could run on it.)

On both (and as far as I know, all) versions of Android, all the text messages are stored in this file, which you need root access to read:

/data/data/com.android.providers.telephony/databases/mmssms.db

Getting the file off the G1 was easy; I entered the Terminal Emulator app (I think it’s installed automatically when you flash CyanogenMod) and copied the file to the SD card:

su
cp /data/data/com.android.providers.telephony/databases/mmssms.db /sdcard/

(su requests superuser permissions, which you have to grant.) Then I connected the G1 to my computer with its USB cable and transferred the file off of it.

Getting the file onto the Nexus 4 was harder. What I did not know is that the Nexus 4 can’t mount its SD card as USB Mass Storage (see the explanation), so I ended up using my Apple laptop to do the transfer, and had to download a program called Android File Transfer. Still, I got it onto the phone’s SD card.

Since the newer version of CyanogenMod comes with a file manager app, I decided to use that to put the file into the correct place, instead of Terminal Emulator (typing shell commands on a phone is no joke.) The file manager is set to “Safe mode” by default, which means it won’t request root access. I changed it to “Prompt User mode” in the settings, then navigated to the above databases/ folder and made a backup copy of the old (empty? 100 KB? It’s a SQLite DB so maybe there are still deleted records in there, but I don’t care to check) database. Then I copied the G1’s mmssms.db file over top of it. Unlike on the G1, there was also a mmssms.db-journal file there, which I hoped wouldn’t mess with things…

I couldn’t see my text messages after going into the messaging app, but after rebooting the phone, they were there.

Geek tip: Malloc debugging on OSX

I’ve been trying to chase down an annoying bug that I suspected to be a case of using uninitialized memory. The problem is, it only shows up about 1 in 30 times (I was lucky to notice it in the first place), and never in a debugger.

Fortunately I found that there’s a library on OSX that tweaks malloc() to help you debug:

DYLD_INSERT_LIBRARIES=/usr/lib/libgmalloc.dylib MallocPreScribble=1 ./myprogram

Or, to do this in LLDB (since, due to System Integrity Protection, your linker-affecting environment variables get wiped when you execute a system program):

lldb -- ./myprogram
env DYLD_INSERT_LIBRARIES=/usr/lib/libgmalloc.dylib
env MallocPreScribble=1
run

This triggered the bug every time, both inside and outside the debugger.

For more information about what you can do with libgmalloc, see this documentation. It only explains how to use that facility in Xcode, though, so the above instructions should help if you’re on the command line.

Wave at the camera

You have probably seen the fake advertisement for Wave, the new way of charging your iOS 8 phone in any standard household microwave. (Although I would venture that some of the responses with fried microwaves and phones are hoaxes as well.)

I admit I did giggle when I first read it — some chump microwaved their expensive phone and blew it up, funny, right? Only I realized that it’s not funny at all.

Why shouldn’t people believe that a new technology would allow them to charge their phone by microwaving it? It’s no more or less magical than any other new technology being invented every day. It just happens not to have been invented yet.

Yes, people need to think critically, check sources, use common sense, and become less science-illiterate. Is microwaving your phone a smart thing to do? No. Could the average person probably have known better? Yes. But if you are lucky enough to be in the minority for whom this is obvious, you don’t have any right to laugh at those for whom it is not.

Nature, why?!

Scientific journals charge subscription fees in order to access their content. If you’re an employed scientist, the university or company where you work usually buys an institution-wide subscription to a journal. In that case you don’t have to log in to the journal website because it recognizes your IP address as belonging to a subscribing institution. In fact, you don’t even get an account on the journal website, because it’s impractical to issue an account to every single user at a university, for example.

So what do you do when you have to look up something while you’re away from your office? You use SSH with port forwarding to connect to work, then visit the website using a proxy server on that port. Since you are now browsing through a work computer, you can read the journal. There’s nothing wrong with this, because your employer has already paid for your access to that content; the barrier was simply the impracticality of issuing you an individual account.

So it’s really strange that Nature Publishing Group, which publishes the overrated Nature family of journals, seems to want to discourage this practice. If you visit the site of a Nature journal from a non-subscriber IP address, they set a cookie in your browser that says you are not a subscriber. So even when you turn on your proxy server and revisit the site, it still tells you you’re not a subscriber and can’t access the journal article. Luckily, it is easily remedied by erasing your browser’s cookies. (Easily done, that is, but not easily thought of. Hope this helps someone.)

Why, Nature, why? Why would you do this? Do you have scientists’ best interests at heart and you want to prevent them from working at home? Or do you hope that people are gullible enough to pay twice for the same content?

Discretization, Part II

In this post I described how I encountered the Sell Your Science contest and was entirely fed up with how they perpetuate the myth that scientists are a bunch of timewasters and that marketable research is the only research worth doing. I wrote the organizers, Science Alliance, a letter and urged other people to do the same. Well, it took fewer letters than I expected for something to happen.

My coworker Jelmer Renema wrote them a more strongly worded e-mail than I did. Today he got a telephone call from someone from Science Alliance who wanted to talk about the e-mail. The outcome of the telephone call was that the Science Alliance employee said they didn’t mean that economic gain was the only valid reason for science; social relevance and curiosity from the public are important too. He admitted that the blurb could have been worded differently, although he claimed that there was a large group of scientists opposed to bringing research to market. No, Jelmer told him, nobody’s opposed to that — they’re opposed to the idea that marketable research is the only worthwhile research. In the end, Science Alliance promised to do better next year and Jelmer offered them his assistance in matters of science communication.

By coincidence, an interview appeared in the Delft University newspaper this week. Professor Piet Borst, former scientific director of the Dutch Cancer Institute, says that the whole ‘valorization’ business has gone too far and gets quite angry about it (translation mine):

“We are going about this in such an absurd way. There’s really no other way to put it. [The ministry of] Economic Affairs is living in the 1970s, they think like this: ‘Those wretched university researchers and other academics, busy only with their own hamfisted hobbies, we have to force them to do useful work, and we can only do that by making them dependent on industry financing. They need guidance from our watchful industrialists over what they do.’ They’re delusional. It’s a recipe for how to do it wrong.”

Note that this man isn’t one of those mythical ‘hermit scientists’ either: he says in the interview that those who do research with public money have a duty to allow their findings to be turned into products, which create jobs.

One other important point that Borst makes is that if you, as a researcher, have a significant stake in a spinoff company, then can you really be trusted to publish findings that will cause your shares to plummet? As the interviewer says in the article, “The answer is obvious once you’ve asked the question.”