Ad-hoc polymorphism in Kotlin

Even though Kotlin doesn’t natively support ad-hoc polymorphism today, it’s actually pretty straightforward to achieve it with little effort. Doing so is not as seamless as it is in Haskell, obviously, but it’s been simple enough that I haven’t really encountered situations in Kotlin where the lack of native support in the language was a showstopper. In this article, I present two techniques you can use to leverage ad-hoc polymorphism in Kotlin.

Extending for “fun” and some profit

Here is what the Haskell wiki has to say about ad-hoc polymorphism:

Despite the similarity of the name, Haskell’s type classes are quite different from the classes of most object-oriented languages. They have more in common with interfaces, in that they specify a series of methods or values by their type signature, to be implemented by an instance declaration.

Before we get into details, let’s define exactly what we are trying to do and why we’re trying to do it. One intuitive way of looking at ad-hoc polymorphism is that it enables us to retroactively make values conform to a certain type. This is a bit abstract but I’m pretty sure you have encountered this problem many times before even if you never realized it. Let’s look at a simple example.

A library typically defines types and then offers functions that take parameters and return values of those types. The only way to make use of that library is to find a way to bring your objects into that library’s object world, or in other words, to be able to convert your objects to objects that this library expects and vice versa. You will find many examples of type classes if you do a simple search: values that behave like a Number, Functor, Applicative, Monad, Monoid, Equality, Comparability, etc… In order not to repeat what’s already out there and in an effort to remain focused on concrete problems, I’m going to pick a different field: JSON.

The need to parse JSON and also convert your objects to JSON is pretty much universal, so in all likelihood, you are already using a JSON library in your code. And at some point, you have had to ask yourself a very simple question: “How do I convert my current objects to JSON so I can use this library?”. The library probably defines some kind of JsonObject type and most of its API is defined in terms of this type, either with functions accepting parameters of that type or returning such values:

interface JsonObject {
    fun toJson() : String
}

In order to leverage this library, converting your objects so they conform to this interface is very important, and once you have converted your objects, you gain full access to all the functionality that this library offers, such as pretty printing JSON, doing search/replaces in JSON, reshaping JSON objects from one form to another, etc…
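
For illustration, here is the kind of function such a library might expose; prettyPrint() is a made-up name for this example, and note that it is typed purely in terms of JsonObject:

// A hypothetical library entry point: it only knows about JsonObject.
fun prettyPrint(o: JsonObject): String =
    o.toJson()  // a real implementation would re-indent the output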

If you own (i.e. you are the author of) these classes, doing so is very easy. For example, you can modify your class to implement that interface:

class Account : JsonObject {
    override fun toJson() : String { ... }
}

The advantage of this approach is that you can now pass all your Account values directly to functions of the JSON library that accept a JsonObject. This is very useful, but the downside is that you have now polluted your class with a concern that makes your design more bloated. If you extend this approach to other types, very soon your Account class will implement multiple interfaces for various concerns, and you will have tied your business logic to a lot of dependencies (i.e. you now need this JSON library in order to compile your Account…).

Another approach is simply to write a function that converts your business objects to JsonObjects:

fun Account.toJson() : JsonObject { ... }

This defines an extension function that adds the method toJson() directly on the Account class, where it belongs. The extension function buys us two very important benefits:

  • Since it’s an extension function, its implementation runs on the Account instance itself (in other words, this is of type Account).
  • This function is defined without making any modification to the Account class itself. Not only does it leave your class untouched and unpolluted by unrelated concerns, but you can also apply this approach to classes that you don’t own. This is extremely important and a critical step toward ad-hoc polymorphism.

With this approach, we have gained separation of concerns and a great amount of flexibility, but we have lost some typing power: we can no longer pass an instance of Account to a function accepting a JsonObject parameter; we need to call toJson() on that instance first.
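
With the hypothetical prettyPrint() function from earlier, that explicit bridging looks like this:

val account = Account()
// Account is not a JsonObject, so convert it explicitly first
prettyPrint(account.toJson())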

This is how far we can go with Kotlin today, and it fits Kotlin’s general design principle of staying away from implicit conversions, a decision I’ve come to respect greatly after several years writing Kotlin code.

Escaping the tyranny of nominal typing

Let’s look at another approach to implement ad hoc polymorphism. Consider the following simple function that saves an object to a database:

fun persist(person: Person) {
    db.save(person.id, person)
}

The object is saved to the database and associated with its id. A little later, I want to persist an Account object, and in the spirit of proper software engineering, I’d rather abstract my existing code than write a second persist() function. So I make my function more generic, and in the process, I discover that in order to be persisted, my Account instance needs to be able to give me its id. After refactoring, my code now looks like this:

interface Id {
    val id : String
}
class Person : Id {
    override val id: String get() = "1"
}
fun persist(id: Id) { ... }

But now, I find myself having to make Account implement Id, which is exactly what I am trying to avoid with ad hoc polymorphism (either because I think it’s bad design or more simply because the class Account is not mine, so I can’t modify it). The realization here is that these type names get in the way of my goal and I’d rather keep all these concepts separate.

What if, instead, the persist() method accepted an additional parameter (a function) that allows me to obtain an Id from its parameter?

fun <T> persist(o: T, toId: (T) -> String) { db.save(toId(o), o) }
// Persist a Person: easy since Person implements Id
persist(person, { person -> person.id })
// Persist an Account: need to get an id some other way
persist(account, { account -> getAnIdForAccountSomehow(account) })

This new approach has a few interesting characteristics:

  1. Notice how completely generic the persist() method has become: it doesn’t reference Person, Account, or even Id, even though it needs some sort of id in order to operate. This function is literally applicable “for all” types (I am intentionally using double quotes here; some of you will probably immediately understand what “for all” means in this context).
  2. We have detached the ability to provide an id from our types. You still have the option to implement this functionality in your types (like Person does) but it’s now entirely optional (as Account shows). This gives you a lot of flexibility since you are no longer forced to use the id supplied by the class, and you can also be more creative in your testing, e.g. saving two different objects but forcing them to have the same id in order to test for collision error cases (see the sketch after this list).
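
Here is a minimal sketch of that testing scenario, reusing the persist() function above (the collision behavior of db.save() is an assumption for this example):

// Force two different objects to share the same id in order to
// exercise the collision error path.
persist(person, { p -> "duplicate-id" })
persist(account, { a -> "duplicate-id" })  // expected to fail with a collision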

This approach takes a drastic step toward a more functional solution to the problem of ad hoc polymorphism: we depend less on types and more on functions. As you can see, it provides some interesting benefits.

An ad-hoc polymorphism proposal for Kotlin

When I reflected on this problem a while ago, it occurred to me that ad-hoc polymorphism has a lot in common with Kotlin’s extension functions: an extension function adds a function to a type outside of the definition of that type, and ad-hoc polymorphism makes a type extend another type outside of the definition of that type. I came up with the concept of “Extension types” and I gave a quick overview of this idea in this article. Extension types would allow us to make types retroactively implement other types with this made-up syntax:

// Not legal Kotlin
override class Account: JsonObject<Account>

The rest of the interface would be implemented with extension functions, as demonstrated in the link above. The downside of this proposal is that it adds some form of implicit conversion, something that is at odds with Kotlin’s current design, so it’s probably unlikely this proposal will go past the stage of strawman but I thought it would be interesting to draw a parallel between extension functions and extension types.

Does Kotlin really need ad-hoc polymorphism?

The more I think about it, the more convinced I am that the value offered by ad-hoc polymorphism is very closely tied to the language you’re using it in. In other words, it’s not a universal tool but one that’s heavily dependent on how well supported it is in your language. Ad-hoc polymorphism is obviously a critical component of Haskell and it has given rise to high amounts of reuse and elegant abstractions in that language but I’m not sure Kotlin would benefit as much from it.

Another important aspect of deciding how useful ad-hoc polymorphism would be in a language is whether that language supports higher kinds (type families). Without higher kinds, your ability to abstract is limited, which lessens the value of ad-hoc polymorphism significantly. And since Kotlin doesn’t support higher kinds as of this writing, the importance of native support for ad-hoc polymorphism is questionable, or at least, certainly not as high a priority as other features.
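
To make that limitation concrete, here is (borrowing the “not legal Kotlin” disclaimer from earlier) the kind of declaration that higher kinds would enable, with Functor as the classic example:

// Not legal Kotlin: F is a type constructor, not a type
interface Functor<F<_>> {
    fun <A, B> map(fa: F<A>, f: (A) -> B): F<B>
}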

At any rate, I have used the two techniques described above in my own code bases with reasonable benefit, so I hope they will be useful to others as well.

Neural Network in Kotlin

It’s hard not to hear about machine learning and neural networks these days since the practice is being applied to an ever-increasing variety of problems. Neural networks can be intimidating and look downright magical to the untrained (ah!) eye, so I’m going to attempt to dispel these fears by demonstrating how these mysterious networks operate. And since there are already so many tutorials on the subject, I’m going to take a different approach and go from top to bottom.

Goal

In this first article of the series, I will start by running a very simple network on two simple problems, show you that it works, and then walk through the network to explain what happened. Then I’ll backtrack to deconstruct the logic behind the network and why it works.

The neural network I’ll be using in this article is a simple one I wrote. No TensorFlow, no Torch, no Theano. Just some basic Kotlin code. The original version was about 230 lines but it’s a bit bigger now that I’ve broken it up into separate classes and added comments. The whole project can be found on GitHub under the temporary “nnk” name. In particular, here is the source of the neural network we’ll be using.

I will be glossing over a lot of technical terms in this introduction in order to focus on the numeric aspect, but I’m hoping to get into more details as we slowly peel the layers. For now, we’ll just look at the network as a black box that gets fed input values and outputs values.

The main characteristic of a neural network is that it starts completely empty but it can be taught to solve problems. We do this by feeding it values and telling it what the expected output is. We iterate over this approach many times, changing these inputs/expected parameters and as we do that, the network updates its knowledge to come up with answers that are as close to the expected answers as possible. This phase is called “training” the network. Once we think the network is trained enough, we can then feed it new values that it hasn’t seen yet and compare its answer to the one we’re expecting.
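
In code, the training phase boils down to a loop along these lines; this is a conceptual sketch, and the names (feedForward(), adjustWeights()) are illustrative rather than the actual API of the network we’ll use below:

repeat(iterations) {
    trainingValues.forEach { data ->
        val actual = network.feedForward(data.inputs)
        // Nudge the weights so the next answer is closer to the expected one
        network.adjustWeights(data.expected, actual)
    }
}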

The problems

Let’s start with a very simple example: xor.

This is a trivial and fundamental binary arithmetic operation which returns 1 if the two inputs are different and 0 if they are equal. We will train the network by feeding it all four possible combinations and telling it what the expected outcome is. With the Kotlin implementation of the Neural Network, the code looks like this:

with(NeuralNetwork(inputSize = 2, hiddenSize = 2, outputSize = 1)) {
	val trainingValues = listOf(
	    NetworkData.create(listOf(0, 0), listOf(0)),
	    NetworkData.create(listOf(0, 1), listOf(1)),
	    NetworkData.create(listOf(1, 0), listOf(1)),
	    NetworkData.create(listOf(1, 1), listOf(0)))
	train(trainingValues)
	test(trainingValues)
}

Let’s ignore the parameters given to NeuralNetwork for now and focus on the rest. Each line of NetworkData contains the inputs (each combination of 0 and 1: (0,0), (0,1), (1,0), (1,1)) and the expected output. In this example, the output is just a single value (the result of the operation) so it’s a list of one value, but networks can return an arbitrary number of outputs.

The next step is to test the network. Since there are only four different inputs here and we used them all for training, let’s just use that same list of inputs, but this time we’ll display the output produced by the network instead of the expected one. The result of this run is as follows:

Running neural network xor()

[0.0, 0.0] -> [0.013128957]
[0.0, 1.0] -> [0.9824073]
[1.0, 0.0] -> [0.9822749]
[1.0, 1.0] -> [-2.1314621E-4]

As you can see, these values are pretty decent for such a simple network and such a small training data set and you might rightfully wonder: is this just luck? Or did the network cheat and memorize the values we fed it while we were training it?

One way to find out is to see if we can train our network to learn something else, so let’s do that.

A harder problem

This time, we are going to teach our network to determine whether a number is odd or even. Because the implementation of the graph is pretty naïve and this is just an example, we are going to train our network with binary numbers. Also, we are going to learn a first important lesson about neural networks: choose your training and testing data wisely.

You probably noticed in the example above that I used the same data to train and test the network. This is not a good practice but it was necessary for xor since there are so few cases. For better results, you usually want to train your network on a certain portion of the data and then test it on data that your network hasn’t seen yet. This helps ensure that you are not “overfitting” your network and that it is able to generalize what you taught it to input values it hasn’t seen yet. Overfitting means that your network does great on the data you trained it with but poorly on new data. When this happens, you usually want to tweak your network so that it will possibly perform less well on the training data but return better results for new data.
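
In practice, a simple random split of the available data is a common way to do this; here is a minimal sketch, assuming allData holds all our NetworkData instances (the 70/30 ratio is an arbitrary choice):

val all = allData.shuffled()             // shuffle to avoid any ordering bias
val cutoff = (all.size * 0.7).toInt()
val trainingValues = all.take(cutoff)    // seen during training
val testValues = all.drop(cutoff)        // kept unseen until testing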

For our parity test network, let’s settle on four bits (integers 0 – 15); we’ll train our network on eleven of these numbers and test it on five it has never seen:

with(NeuralNetwork(inputSize = 4, hiddenSize = 2, outputSize = 1)) {
	val trainingValues = listOf(
	    NetworkData.create(listOf(0, 0, 0, 0), listOf(0)),
	    NetworkData.create(listOf(0, 0, 0, 1), listOf(1)),
	    NetworkData.create(listOf(0, 0, 1, 0), listOf(0)),
	    NetworkData.create(listOf(0, 1, 1, 0), listOf(0)),
	    NetworkData.create(listOf(0, 1, 1, 1), listOf(1)),
	    NetworkData.create(listOf(1, 0, 1, 0), listOf(0)),
	    NetworkData.create(listOf(1, 0, 1, 1), listOf(1)),
	    NetworkData.create(listOf(1, 1, 0, 0), listOf(0)),
	    NetworkData.create(listOf(1, 1, 0, 1), listOf(1)),
	    NetworkData.create(listOf(1, 1, 1, 0), listOf(0)),
	    NetworkData.create(listOf(1, 1, 1, 1), listOf(1))
	)
	train(trainingValues)
	val testValues = listOf(
	    NetworkData.create(listOf(0, 0, 1, 1), listOf(1)),
	    NetworkData.create(listOf(0, 1, 0, 0), listOf(0)),
	    NetworkData.create(listOf(0, 1, 0, 1), listOf(1)),
	    NetworkData.create(listOf(1, 0, 0, 0), listOf(0)),
	    NetworkData.create(listOf(1, 0, 0, 1), listOf(1))
	)
	test(testValues)
}

And here is the output of the test:

Running neural network isOdd()
[0.0, 0.0, 1.0, 1.0] -> [0.9948013]
[0.0, 1.0, 0.0, 0.0] -> [0.0019584869]
[0.0, 1.0, 0.0, 1.0] -> [0.9950419]
[1.0, 0.0, 0.0, 0.0] -> [0.0053276513]
[1.0, 0.0, 0.0, 1.0] -> [0.9947305]

Notice that the network is now outputting correct results for numbers that it had never seen before, just because of the way it adapted itself to the training data it was initially fed. This gives us good confidence that the network has configured itself to classify numbers from any input values and not just the ones it was trained on.

Wrapping up

I hope this brief overview has whetted your appetite or at least piqued your curiosity. In the next installment, I’ll dive a bit deeper into the NeuralNetwork class, explain the constructor parameters, and we’ll walk through the inner workings of the neural network that we created to demonstrate how it works.

Old school Apple ][ cracking

My first steps in programming probably go back about thirty-five years, to the HP-38 and HP-41, but nothing will ever top the amazing times I had cracking games on the Apple ][ in the early 80s.

The Apple ][ and its DOS were extremely fertile grounds for software protection and led to some of the most fascinatingly intricate approaches to making sure that your program could not be easily copied. I’m not going to dive very deep into technical details about the Apple ][ architecture, but the short version is that this computer let you reimplement how bytes are stored on the diskettes that you ship your software on, so needless to say, companies selling software for a living were more than happy to do just that in an attempt (mostly futile) to curb piracy.

I did a lot of cracking back in those days, mostly for the fun of it. Actually, I enjoyed getting my hands on games more for the pleasure of cracking them than for actually playing them. However, one particular game resisted my attempts: “The Blade of Blackpoole”, a pretty mediocre adventure game in a style that was very popular then.

This copy protection used a lot of tricks that I just was not able to handle at the time. Remember, this was the early 80s: there was no Internet, and there was pretty much nobody around me with enough technical knowledge of the Apple ][ to help me out. I had to figure things out on my own.

Recently, I had the crazy idea to revisit this old skeleton of mine and see if I could do better now, given all the tools and technology that the 21st century affords me. So I grabbed an image of the protected version of the game, fired up a few emulators (I did this work on both Windows and Mac OS) and got to work.

It was slow at first, but I was spooked to realize how much I actually remembered of the Apple ][‘s internal architecture. And what I didn’t remember, the Internet happily provided. As it turns out, the Apple ][ cracking scene is still quite active (shout out to my inspiration for this work: a2_4am, who has been actively cracking hundreds of Apple ][ games this past year alone).

I carefully documented all my work cracking the Blade of Blackpoole in this document. I decided to store it in a separate file because it’s long and gruesome and goes into excruciating details about the Apple ][ and 6502 assembly. It’s not for the faint of heart, but I think you might find it interesting to follow even if you’re not completely familiar with all the technical details because it captures pretty accurately the timeless struggle between programmers who write copy protections and programmers who defeat them.

Fast forward to 2016. Copy protection is more alive than ever, and the producer side seems to have struck a very serious blow to the cracking scene with Denuvo, a system that is proving extremely hard to crack and which, to everyone’s surprise, is actually an anti-tamper mechanism and not an anti-piracy technology. There is so much to say about this that I’ll probably save it for another post, but in the meantime, I hope you enjoy my old school cracking report.

The Kobalt diaries: annotation processing

I recently added apt support to Kobalt, which is a requirement for a build system these days, and something interesting happened.

First of all, the feature itself in Kobalt: pretty straightforward. The apt plug-in adds a new dependency directive similar to compile:

dependencies {
    apt("com.google.dagger:dagger:2.0.2")
}

The processing tool can be further configured (output directory, arguments, etc…) with a separate apt directive:

apt {
    outputDir = "generated/sources/apt"
}

In order to test this new feature, I decided to implement a simple annotation processor project and I went for a Version class generator. As I wrote this processor, I realized that it was actually something I could definitely use in my other projects.

Of course, you can always simply hard code the version number of your application in a source file but that version number is typically something that’s useful outside of your code: you might need it in your build file, or when you generate your artifacts, or maybe other projects need to refer to it. Therefore, it often makes sense to isolate that version number in a property file and have every entity that needs it read it from that property file.

This is how version-processor was born. It’s pretty simple really: all you need to do is annotate one of your classes with @Version and a GeneratedVersion.java file is created, which you can then refer to. That version number can either be hardcoded or specified in a properties file. Head over to the project’s main page for the details.
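
For illustration, here is a hypothetical usage sketch (head to the project’s page for the actual syntax and options):

// Hypothetical sketch: annotating a class triggers the generation
// of GeneratedVersion.java at compile time.
@Version
class Main

// The generated class can then be referenced like any other:
// println(GeneratedVersion.VERSION)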

And of course, it’s built with Kobalt and if you are curious, here is the processor’s build file:

val processor = javaProject {
    name = "version-processor"
    group = "com.beust"
    artifactId = name
    version = "0.2"
    directory = "processor"
    assemble {
        mavenJars {}
    }
    jcenter {
        publish = true
    }
}

Happy version generating!

The full series of articles on Kobalt can be found here.

TensorFlow’s rough exterior

Like many others, I have paid very close attention to Google’s TensorFlow announcement, and I’m planning to invest a decent amount of time to dive into it and understand it. But watching Jeff Dean’s video about it, I couldn’t help but take notice of one of the code samples that he shows:

graph = tf.Graph()
with graph.AsDefault():
  examples = tf.constant(train_dataset)
  labels = tf.constants(train_labels)
  W = tf.Variables(tf.truncated_normal([rows*cols, num_labels]))
  b = tf.Variables(tf.zeros([num_labels]))
  logits = tf.mat_mul(examples, W) + b
  loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, labels))

What a mess…

I realize this is just one of the two front ends (Python, the other being in C++) but the syntactic conventions of the snippet above are all over the map.

I see capitalized functions (Graph()) when most of the functions are lowercase. Capitalized variables (W) and lowercase ones (b), both of which are the result of the same function. Functions using underscores and others using capitalized camel case. There just doesn’t seem to be any rhyme or reason to the conventions.

The only style that’s not represented in this short snippet is straight camel case.

This hurts my eyes. Hopefully, spending some time with this fascinating tool will demystify it somewhat. Or maybe it will motivate me to write a front end I feel more comfortable with, say in Kotlin.

The Kobalt diaries: Android

A lot of work has gone into Kobalt since I announced its alpha.

I’m planning to post more detailed updates as things progress, but today I’d like to briefly show a major milestone: the first Android APK generated by Kobalt.

I picked the Code path intro app as a test application. I first built it with Gradle to get a feel for it; the apk was generated in about 27 seconds. Then I generated a Build.kt file with ./kobaltw --init and added a few Android-related directives to reach the following (complete) build file:

import com.beust.kobalt.*
import com.beust.kobalt.plugin.android.*
import com.beust.kobalt.plugin.java.*
val p = javaProject {
    name = "intro_android_demo"
    group = "com.example"
    artifactId = name
    version = "0.1"
    dependencies {
        compile("com.android.support:support-v4:aar:23.1",
                file("app/libs/android-async-http-1.4.3".jar))
    }
    sourceDirectories {
        listOf(path("app/src/main/java"))
    }
    android {
        applicationId = name
        buildToolsVersion = "21.1.2"
    }
}

Then I launched the build with ./kobaltw assemble, and…

Less than five seconds to generate R.java, compile it, compile the code, run aapt, generate classes.dex and finally generate the apk. If you are curious, you can check out the full log.

Admittedly, Kobalt doesn’t yet handle build types, flavors, or manifest merging, but the example app I’m building here doesn’t use those either, so I don’t expect the build time to increase much. There is a lot more to be done before Kobalt’s Android plug-in is ready for more users, but this is a pretty encouraging result.

Google Fi unboxing

I received my Google Fi order, and the package contained more than I expected.

The business-card-sized item at the bottom is the SIM card. The rest is:

  • A portable charger.
  • A case for your Nexus 6.
  • A headset.

The charger has two USB ports and a micro-USB one. Apparently, you can charge it from any of these ports (very convenient) and you can plug in two phones at the same time (probably three if you can find a dual micro-USB cable).

Finally, the headset has something that’s hard to find in headsets in general: volume control. It also has an extra jack, so you can plug another headset into it. The only downside of this headset is that the control block dangles on your cheek instead of being located much lower on the cable. I don’t understand why such headsets are still manufactured.

I haven’t tested the service yet; I’ll report back after I’ve had a chance to use it thoroughly.

The long and arduous road to JCenter and Maven bliss

TestNG is available on both Maven Central and JCenter, and I used to publish the artifact to these two repos with Maven. Recently, I took some time trying to obtain the same result with Gradle, and so far it has been a very painful and agonizing experience because there are so many moving parts to the whole process:

  • Gradle itself.
  • Using the right plugins, then configuring them properly.
  • Understanding the intricacies of JCenter/Bintray publishing.
  • Too much incorrect information out there. There are a lot of tutorials available on the Internet, way too many actually, especially since a lot of them are out of date.
  • IDEA offers close to no help while editing your Gradle file: no auto completion, claiming your file has errors when it doesn’t, not complaining about broken files, etc…

My goal getting into this operation was simple and, so I thought, reasonable: being able to publish snapshots and releases from the command line to both JCenter and Maven Central. So far, my conclusion is that there is really no simple way to achieve this goal. There is a complicated way, which I describe below, but even that complicated way doesn’t quite achieve my goal. In the end, I’m getting close to that goal except that I’m still going manually through the Nexus UI to close and deploy the artifact to Maven Central. I’ll post an update if I ever solve this.

The final structure of the build layout looks like this:

  • build.gradle: main build file, which at the end includes
  • gradle/publishing.gradle: sets up publishing routines and values that are shared by both Maven Central and JCenter. At the end, this script includes
  • gradle/publishing-maven.gradle and gradle/publishing-jcenter.gradle, which include the respective plugins and perform the publishing.

What follows is not a tutorial (there are so many already) but instead a series of errors that I encountered along the way and how I fixed them.

401 when uploading to bintray

Verify your credentials. One way of setting them is:

bintray {
    user = properties.getProperty("bintray.user")
    key = properties.getProperty("bintray.apikey")
}

I put these values in local.properties, which I load explicitly (gradle.properties might be a better location):

bintray.user=...
bintray.apikey=...

Javadocs are not being published

Maven Central will reject artifacts that don’t contain Javadocs and these are typically not included by default:

task javadocJar(type: Jar, dependsOn: javadoc) {
    classifier = 'javadoc'
    from 'build/docs/javadoc'
}
task sourcesJar(type: Jar) {
    from sourceSets.main.allSource
    classifier = 'sources'
}
artifacts {
    archives jar
    archives javadocJar
    archives sourcesJar
}

Javadocs are not being uploaded

One line to add to your bintray configuration:

bintray {
    // Without this, javadocs don't get uploaded
    configurations = ['archives']
}

“Cannot create task of type ‘Jar’ as it does not implement the Task interface.”

At some point during my tribulations, I started encountering this mystifying error. I ended up realizing that IntelliJ had sneakily added an import org.apache.tools.ant.taskdefs.Jar at the top of my gradle file, which is obviously not the class that we want. Removing this import fixed the problem (and you might want to configure IDEA to exclude it from your imports to avoid this problem in the future).

Artifacts are not being signed

Add the following to your signing configuration:

apply plugin: 'signing'
signing {
    required { gradle.taskGraph.hasTask("bintrayUpload") }
    sign configurations.archives
}

Then in your gradle.properties (not local.properties):

signing.keyId=...
signing.password=...
signing.secretKeyRingFile=(path to .gnupg/secring.gpg)

.asc files are not being generated

Another requirement from Maven Central, which you fix in the bintray configuration:

bintray {
    pkg {
        version {
            gpg {
                // Without this, .asc files don't get generated
                sign = true
            }
        }
    }
}

“Return code is: 400, ReasonPhrase: Bad Request”

In my initial attempts, ./gradlew publish would fail with this error, probably one of the most frustrating things about Sonatype: the HTTP errors are completely opaque and they don’t give you any detail on why they failed, while they easily could. Here is a list of potential reasons for this 400:

  • user credentials are wrong
  • url to server is wrong
  • user does not have access to the deployment repository
  • user does not have access to the specific repository target
  • artifact is already deployed with that version if it is a release (not -SNAPSHOT version)
  • the repository is not suitable for deployment of the respective artifact (e.g. release repo for snapshot version, proxy repo or group instead of a hosted repository)

Just to list a few. Why the server won’t give more details is beyond me, and it’s one of the main reasons why I wish I could stop dealing with Sonatype completely.

“./gradlew uploadArchives” failing with mysterious HTTP errors

Another mistake I initially made was to try to upload the artifacts directly to the Maven Central repo instead of Sonatype’s Nexus staging host. The correct configuration is:

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: "https://oss.sonatype.org/service/local/staging/deploy/maven2") {
                authentication(userName: System.getenv('SONATYPE_USER'), password: System.getenv('SONATYPE_PASSWORD'))
            }
            snapshotRepository(url: "https://oss.sonatype.org/content/repositories/snapshots") {
                authentication(userName: System.getenv('SONATYPE_USER'), password: System.getenv('SONATYPE_PASSWORD'))
            }
        }
    }
}

As I said at the top of this article, you then need to go deploy the archive manually from the Nexus UI.

I can’t find an answer to my Gradle problem!

Here is a life pro tip: whenever you do Google searches about Gradle, restrict the results to the last year only and read the StackOverflow answers first. Anything published before that is pretty much guaranteed to be out of date.

Sonatype documentation is terrible.

Yes, yes it is. For example, the first hit when you search for how to deploy to Maven Central from Gradle will land you here. This article is actually an indirection to the “real” article here, which is… 404. The next link is also a 404.

Extremely frustrating.

Next steps

My current configuration enables the following process:

  • ./gradlew bintrayUpload uploads the release to JCenter. It will fail if the version is a SNAPSHOT (intentional, since I upload snapshots to Maven’s snapshot repo; this part is straightforward and fully automated). If you want to publish snapshots to JCenter as well, you can do this by publishing to JFrog, although my attempts in that direction have never succeeded.
  • ./gradlew uploadArchives will upload the snapshot to Maven’s snapshots repo and the release to Sonatype’s staging repo. This is decided automatically based on whether the version name contains the string “SNAPSHOT”.

The build files themselves add up to more than 300 lines, which is mind-boggling for operations that should be close to standard. Gradle is certainly very far from having sensible defaults.

I’m hoping to eventually be able to fully deploy to Maven Central from the command line but I’m not sure it’s possible, so suggestions are welcome.

Easily inspect your SQLite database on Android

Here is a script I use very often for Android development: this small shell script will copy the database from your device to your file system and then launch SQLiteBrowser on it, allowing you to inspect your tables very quickly. I’ve found this script extremely useful, sometimes going as far as calling it multiple times while my code is paused on breakpoints in my IDE.

This script takes additional steps before pulling the database, such as changing a few file permissions, since I have noticed that some devices are stricter than others about allowing database pulling. As far as I can tell, this script has worked on every single device I’ve used so far.

#!/bin/bash
#
# pull-db
# Inspect the database from your device
# Cedric Beust
#
PKG=com.beust.example
DB=my.db
# Loosen permissions so the database can be copied off the device
adb shell "run-as $PKG chmod 755 /data/data/$PKG/databases"
adb shell "run-as $PKG chmod 666 /data/data/$PKG/databases/$DB"
adb shell "rm /sdcard/$DB"
adb shell "cp /data/data/$PKG/databases/$DB /sdcard/$DB"
# Pull the copy to /tmp and open it in SQLiteBrowser
rm -f /tmp/${DB}
adb pull /sdcard/${DB} /tmp/${DB}
open -a /Applications/sqlitebrowser.app /tmp/${DB}

Game design and game implementation

“Because of a bug (it is an off-by-one error) the parley can only work if enemy and party member *do not* speak the same language.”

One of the many memorable quotes from a fascinating article about an old school RPG, written by one of its developers. I don’t even know this particular video game, even though I played a lot of RPGs during my Apple ][ and Amiga days, but this particular quote resonates with me because it ties game design and game implementation together in a very explicit way.

The whole article is a must read for anyone who’s interested in game development or game design, or both.