Compile v8 arm, arm64, ia32

Quick guide on how to compile v8 8.4-lkgr for Android on Ubuntu in one copy/paste script. I use this script regularly, so it is well tested.

If you are looking for already pre-compiled, ready-to-use v8 versions, you can find them here.

# Install git:
sudo apt install git

# Install depot tools
Follow instructions

# Fetch v8 source code.
# Use the branch of your choice. I will use 8.4-lkgr (last known good release).
# I'd advise always using -lkgr branches.
fetch v8

# Enter v8 folder.
cd v8

# Check out the desired branch.
git checkout 8.4-lkgr
git pull

# Install all dependencies, ndk, sdk, etc.
# This may take a while. It downloads android tools: sdk+ndk, etc.

# Set android target. This op will take some time, ndk alone is +1Gb
echo "target_os = ['android']" >> ../.gclient && gclient sync

# Generate compilation target: 
# Change android_arm.release to the folder name of your choice, in this
# case:
# Use this to compile for arm/arm64
tools/dev/v8gen.py arm.release

# Use this to compile for x86
tools/dev/v8gen.py ia32.release

# Edit the gn configuration file (args.gn in the generated out dir):
# I'd recommend disabling icu support, and setting
# symbol_level=0 for faster compilation and thinner
# output libs. You can get the whole list of
# compilation options by executing:
# `gn args <out_dir> --list`
# Optionally set `target_cpu="arm64"` or `target_cpu="x86"` (if ia32 was used)


# This is my file contents:
android_unstripped_runtime_outputs = false
is_component_build = false
is_debug = false
symbol_level = 1
target_cpu = "arm"
target_os = "android"
use_goma = false
use_custom_libcxx = false
use_custom_libcxx_for_host = false
v8_target_cpu = "arm"
v8_use_external_startup_data = false
v8_enable_i18n_support = false
v8_android_log_stdout = true
v8_static_library = true
v8_monolithic = true
v8_enable_pointer_compression = false

# to compile arm64, just change target_cpu and v8_target_cpu to arm64

# Compile target: 
# This may take up to 1 hour depending on your setup.
# Optionally use a -j value suitable for your system.
ninja -C out.gn/arm.release v8_monolith

# The fat lib file has been generated by the v8_monolithic parameter at
# <e.g. out.gn/arm.release>/obj/libv8_monolith.a

# source headers, for inspector compilation.
mkdir -p src/base/platform
mkdir -p src/common
mkdir -p src/inspector
mkdir -p src/json
mkdir -p src/utils
mkdir -p src/init

cp -R ../../../../src/common/*.h ./src/common
cp -R ../../../../src/base/*.h ./src/base
cp -R ../../../../src/base/platform/*.h ./src/base/platform
cp -R ../../../../src/inspector/*.h ./src/inspector
cp -R ../../../../src/json/*.h ./src/json
cp -R ../../../../src/utils/*.h ./src/utils
cp -R ../../../../src/init/*.h ./src/init

# copy v8 compilation header files:
cp -R ../../../../include ./

# For compilation on Android, always use the same ndk as 
# `gclient sync` downloaded. 
# Enjoy v8 embedded in an Android app

Compile for Android emulator

tools/dev/v8gen.py ia32.release
# edit args.gn to contain the following:
is_debug = false
target_cpu = "x86"
use_goma = false
target_os = "android"
v8_use_external_startup_data = false
v8_enable_i18n_support = false
v8_monolithic = true

Annotation processing basics

Annotation processing has been around since Java 5 was released in 2004. Runtime reflection has always suited my needs, but I've recently been much into low-level performance, so I started tinkering with it.

Though there exist great processors like Butterknife or Dagger/2, I always try to deeply understand implementation details, and so I created a very simple annotation processor example. It does less than 1% of what Butterknife is able to do, but it is a good starting point for understanding how automatic annotation-based java source code generation works.

One thing to make clear from the very beginning is that a processor does not change existing class source code. It can only write new source code files. This is important in order to think about what kind of code will be generated, and to keep in mind, for example, scope access from automatically generated code to existing code. Another interesting fact is that a processor recognizes annotations in compiled code as well as in plain source code.

In this example, I will try to recreate @BindView from Butterknife. The example is able to automatically bind all attributes in a Class object annotated with @BindView. It generates code of the form v = findViewById(id) for each annotated field, and binds them just by calling Binder.Bind(this). E.g.:

public class MainActivity extends AppCompatActivity {

    // public/package access so that the binder object can set the
    // value directly.
    // Otherwise it would have to set the value reflectively.
    @BindView(id = R.id.text)
    TextView text;
    @BindView(id = R.id.text22)
    EditText text22;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // solve and bind annotations to views.
        Binder.Bind(this);

        text.setText("APT worked!!");
    }
}

To make this work, I created a gradle project with three different modules:

  1. Annotations module, which contains the annotations that the processor will handle. Here also resides the example Binder object. This module is referenced by the application, and the processor modules.
  2. Processor module. Here is where the annotations will be processed and new source code will be generated. This module needs a reference to the Annotations module for obvious reasons. Important to note is that the annotation processing is a whole java application, and runs in an independent JVM. Here you can include desired dependencies because ultimately, you are running a whole Java app. An application can have many different Annotation processors, each of which will run on its own JVM.
    One final thought about the example processor is that it mostly is aimed at annotated java source code, and not compiled classes.
  3. Application module. This module references the Annotations module for runtime, but uses the Processor module for compilation and code generation purposes. The processor is used as an apt dependency.
    The application contains a couple classes with attributes annotated with @BindView for testing purposes.

Annotations module

This is a very simple module which defines just one annotation to be processed: @BindView. It can be applied only to class fields, and it will be retained in the class file (RetentionPolicy.CLASS), not needing to be available at runtime (RetentionPolicy.RUNTIME). Here's its definition:

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.CLASS)
public @interface BindView {
    public int id();
}

The other object in this module is the Binder object, which is invoked like Binder.Bind(this) to bind all annotated view types with their view resource ids. How it works is easy: for each object passed to the Bind method, the Binder expects the annotation processor to have created a class of the form object.getClass().getCanonicalName() + "$$Binder". The Binder is something like:

public static final void Bind( Object obj ) {
    String str = obj.getClass().getCanonicalName();
    String clazz = str + "$$Binder";

    try {
        Class _c = Class.forName( clazz );
        IBinder binder = (IBinder)_c.newInstance();
        // call generated code
        binder.bind( obj );
    } catch( Exception x ) {
        // no binder class exists.
    }
}

Generated $$Binder classes implement a bind method. This method is generated by the processor, and performs the binding for all @BindView annotated objects.

Annotation Processor

This is where things actually get really interesting. An annotation processor needs to implement javax.annotation.processing.Processor, usually by extending javax.annotation.processing.AbstractProcessor.

The first thing to note in our processor’s code is the following annotation to the processor class itself:

@AutoService(Processor.class)
public class Processor extends AbstractProcessor {

This is, in fact, another annotation processor directive. If needed, it creates a file META-INF/services/javax.annotation.processing.Processor with our annotation processor's fully qualified class name inside. In our case: com.hyper.processor.Processor. The annotation comes from Google's auto-service library, which must be added as a dependency of the processor module.
Yes, it is an annotation processor invoked from our annotation processor. Isn't it simply nice?

To make things quick, we just need to declare what annotations the processor will act upon, describing the annotations' fully qualified class names as strings:

public Set<String> getSupportedAnnotationTypes() {
    // we'll accept just one annotation.
    HashSet<String> ret = new HashSet<String>();
    ret.add( BindView.class.getCanonicalName() );
    return ret;
}

tell what source version the processor accepts:

public SourceVersion getSupportedSourceVersion() {
    // accept latest version possible
    return SourceVersion.latestSupported();
}

and override the process method, which is where the actual magic happens.

This processor just expects fields annotated with @BindView. It keeps track of all the annotated fields per class, and then generates a file for each Class with annotated fields. For each Class, a file of the form Class.getCanonicalName() + "$$Binder" is created. This is a convention, so that at runtime this class can be reflectively instantiated and all annotation-bound views solved.

The collection of identified annotated elements, matching the annotations set defined in getSupportedAnnotationTypes, are of type Element, not Class. We are in the land of metaprogramming, building code blocks on the fly, and we get information directly either from the java compiler or the java bytecode. There is an Element subtype for each kind of annotated code element: a class, attribute, interface, etc. In our example we annotate fields, so VariableElement is the type of each annotated element passed in to the processor's process method.

I created a helper class to deal with VariableElements, and to be able to recognize their class type, variable name, value, etc. The annotation is received as a javax.lang.model.element.AnnotationMirror object, so obtaining annotation values is easy; see the constructor of AnnotationInfoView:

Note that the annotated element's name, type, etc. are obtained from the VariableElement object as strings. From here, you could find the class, and reflect on it if needed.

The processor's final piece is to write the generated java files. I extended AbstractProcessor, which upon a call to its init method saves a reference to a ProcessingEnvironment object. This object exposes environment-specific objects, like Messager, which allows printing messages to the console while the processor is working, or Filer, which allows creating a JavaFileObject to write the generated java code to. Writing the generated java code is just a matter of creating a Writer like:

String fname = class_element + "$$Binder";
JavaFileObject jfo = processingEnv.getFiler().createSourceFile( fname );
Writer writer = jfo.openWriter();

and write to it with a simple java Writer. There are wonderful tools to aid you in code generation.

Example project structure

The example is an Android Studio gradle project. For this to work properly, the Application gradle file must include the following in buildscript/dependencies, which will bring the annotation processing tool plugin into Android Studio:

buildscript {
    repositories {
        ...
    }
    dependencies {
        classpath 'com.neenbedankt.gradle.plugins:android-apt:1.4'
    }
}

The Application module (where the processor is going to execute), must include these in its gradle file:

apply plugin: 'com.android.application'
apply plugin: 'com.neenbedankt.android-apt'

android {
    ...
}

dependencies {
    compile fileTree(include: ['*.jar'], dir: 'libs')
    compile ''
    compile project(':annotation')  // include annotations module
    apt project(':processor')       // run apt
}

The processor module, must include the following in its dependencies:

dependencies {
    compile fileTree(include: ['*.jar'], dir: 'libs')
    compile ''
    compile project(':annotation')  // include annotations module
}

Example source code

There's lots of pending stuff about annotation processing. This is a bare-minimum example which does one simple thing, basic enough to expose the processor's core ideas, and to offer a minimal, already-set-up project to tinker with:

Software development empirical rules

All of your codebase will be totally rewritten from the ground up several times during the project lifecycle. You can write good code several times faster than your best code. Write good code, not the best.

Each codebase rewrite will eventually contain more bugs than the previous one.

Sometimes the only way of moving forward is to throw everything out, and start from scratch.

When talking about performance or improvements, everything not based on metrics can be considered just an opinion.

Designing software is an iterative process. The number of iterations is always one more than the number of iterations performed at any given time.

There is never a single right solution. On the contrary, there are always many wrong solutions. A dev should know where they stand.

Software is built from requirements. It is worthless to develop something not asked for. Corollary: it is just as worthless to try to outsmart the requirements.

From the required use cases, only a small percentage will be used. Go figure the usage of not requested use cases.

100% secure software will take an infinite amount of time and effort. Make your development ready for failure.

Time is the only shrinking resource.

There are no important stakeholders watching during the moment when the project works like a charm.

Blaming does not make your software better.

Being right is worthless. You can in fact take my gist, and suddenly be twice as right as before!


Automata

Automata is my implementation of a deterministic finite state machine framework with support for: declarative definition, nested states, internal/external transitions, guards, asynchronous execution, serialisation, etc.


Switch/case blocks, or even worse, logic stored in multiple variables, are a poor design choice and a source of uncontrolled conditions. Automata brings in logic control by managing your system's state complexity automatically.

The idea is simple: Automata enforces code organisation by convention, and handles the logic behind state change in a simple event-based asynchronous protocol. The result is deterministically predictable execution of code for the same starting conditions. Or, put another way: reach the same bugs for the same initial conditions and sequence of events.

Github repo:

Example how-to:

// define an automata.
const json: FSMCollectionJson = [
    {
        name: "Test",   // FSM name
        state: ["a", "b", "c"],
        initial: "a",
        transition: [
            { event: "ab", from: "a", to: "b" },
            { event: "bc", from: "b", to: "c" }
        ]
    }
];

// register automata definition for later reference.
FSMRegistry.Parse(json);

// get a session for a given automata.
const session = FSMRegistry.SessionFor("Test");

// let the system handle complexity.
session.dispatch("ab"); // change state A to B
session.dispatch("ef"); // discard this message. State B has no 'ef' transition

FSM, States and transitions

In Automata, an FSM is an immutable entity, and so are the States and Transitions that conform it. It is a directed graph of nodes (States) connected by Transitions.

These are defined in the simplest JSON format possible:

{
  name    : string;   // automata name
  state   : string[]; // state names
  initial : string;   // initial state name.
  transition : {
    event : string;     // event triggering state change
    from  : string;     // from state name
    to    : string;     // to state name
  }[]                 // array of transitions
}

State entry/exit and Transition Actions.

When entering or exiting a State, and when a Transition is triggered, Automata calls function hooks associated with these events, called Actions. For example, when a Transition from State A to State B by Event E is triggered, the following sequence of functions is called:

  • call State A exit action.
  • call Transition E action.
  • call State B enter action.

These actions are optional, and are defined in the Session client state object.


Session

The session object has two main responsibilities:

  • it keeps track of one specific internal FSM state.
  • it keeps a reference to Client State, which links State with an arbitrary state object.

For example, we can define an FSM for a game like Words With Friends. A session will keep track of the internal State (e.g. changing_tiles) and the game state object, which keeps bound information for the board, player's tiles, etc.

State enter/exit actions will be functions of the form <state_name>_enter and <state_name>_exit respectively. Transition actions will be functions of the form <transition_name>_transition. These functions are defined in the Session Client State object.
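As a standalone sketch (illustrative TypeScript only, not the actual Automata implementation), this naming convention boils down to looking actions up by name on the client state object and calling only the ones that exist:

```typescript
// Look an action up by name on the client state object, and invoke it
// only when it exists. Missing actions are silently skipped.
function callAction(client: Record<string, any>, name: string): boolean {
    const fn = client[name];
    if (typeof fn === "function") {
        fn.call(client);
        return true;
    }
    return false;
}

// For a transition a --ab--> b, the convention tries, in order:
// a_exit, ab_transition, b_enter. Returns the actions actually fired.
function fireTransition(
    client: Record<string, any>,
    from: string,
    event: string,
    to: string
): string[] {
    const fired: string[] = [];
    for (const name of [`${from}_exit`, `${event}_transition`, `${to}_enter`]) {
        if (callAction(client, name)) fired.push(name);
    }
    return fired;
}
```

With this convention, adding behaviour for a state is just a matter of defining the correspondingly named method on the client state object.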

For example, a Session object for the previous FSM definition could be:

// external FSM state.
// your game state, like board, player's tiles, decks, etc.
class SessionClientState {

    numPlayers = 0;

    constructor() {}

    b_enter( ctx: StateInvocationParams<SessionLogic> ) {}

    b_exit( ctx: StateInvocationParams<SessionLogic> ) {
        // guaranteed to be called when exiting state 'b'.
    }

    a_exit( ctx: StateInvocationParams<SessionLogic> ) {}

    ab_transition( ctx: StateInvocationParams<SessionLogic> ) {}

    // not all States or Transitions need their actions defined.
    // Automata will call only the existing ones.
}

// a session, binding FSM state with external state.
const session = FSMRegistry.SessionFor(
    "Test",                   // a registered FSM
    new SessionClientState()  // attach client state
                              // to automata state.
);

How you interact with the session object is simple:

session.dispatch({
    event: "ab"
});

// this will invoke `a_exit`, `ab_transition`, and `b_enter` 
// functions if any are defined in the SessionClientState object.
// if the current state does not recognize this message
// (defined in transitions block of the FSM),
// this dispatch has no effect.

Nested states

In Automata, FSMs are States by definition. Nested States mean that a given FSM state can refer to another FSM as one of its states.
Internally, a Session object keeps a stack of states called SessionContext.

Even the most basic Session object, like the example Test FSM, will have two contexts. If at any given time a Session is in State a, the context stack would be like:

State a      // regular State
Test         // FSM State

As such, entering any FSM, triggers the following sequence of actions:

execute Test initial_Transition Action
execute Test_enter Action
execute a_enter Action

For each entered FSM, the Session will contain an additional SessionContext, thus keeping track of entered substates.

You can refer to another FSM in any FSM definition by naming the State as @<fsm name>. For example:

const json: FSMCollectionJson = [
    {
        name: "SubStateTest",
        state: ["_1", "_2", "_3"],
        initial: "_1",
        transition: [...]
    },
    {
        name    : "Test",
        state   : ["a", "b", "@SubStateTest", "c"],
        initial : "a",
        transition : [...]
    }
];

Exiting Hierarchically nested states

Entering hierarchies of States is easy, but exiting nested States can be misleading.

When transitioning, Automata will always try to find a valid Transition for the current state. This means that the whole stack of contexts will be checked for a valid transition.

For example, taking the previous substate stack as a base: to find a suitable Transition for the current State _1, Automata will also check the SubStateTest State and the enclosing FSM for a valid Transition. In this sample FSM definition, assume a session for Test4, which references another FSM as @Sub:

+- Test4 -------------------------------------------+
|                                                   |
|     +---+             +------+             +---+  |
|-->  | A |  -- ab -->  | @Sub |  -- sb -->  | B |  |
|     +---+             +------+             +---+  |
|                                                   |
+---------------------------------------------------+

+- Sub ------------------------------------------+
|                                                |
|     +---+             +---+             +---+  |
|-->  | 1 |  -- 12 -->  | 2 |  -- 23 -->  | 3 |  |
|     +---+             +---+             +---+  |
|                                                |
+------------------------------------------------+

When trying to Transition from 2 with a message of type {event:"sb"}, Automata will find a valid transition from @Sub to B, resulting in the following action calls:

+ state 2 exit Action
+ state Sub exit Action
+ transition sb Action
+ state B enter Action
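This lookup can be sketched standalone (illustrative TypeScript, not the Automata API): walk the context stack from the innermost state outwards until some level defines a transition for the event, and discard the message otherwise:

```typescript
// state -> event -> target state.
type TransitionTable = Record<string, Record<string, string>>;

// Walk the context stack (outermost context first, innermost state last)
// and return the first level that knows a transition for the event.
function findTransition(
    stack: string[],            // e.g. ["Test4", "Sub", "2"]
    table: TransitionTable,
    event: string
): { from: string; to: string } | undefined {
    for (let i = stack.length - 1; i >= 0; i--) {
        const to = table[stack[i]]?.[event];
        if (to !== undefined) {
            return { from: stack[i], to };
        }
    }
    return undefined; // no level recognizes the event: discard the message.
}
```

For the Test4/Sub example above, dispatching "sb" while in state 2 resolves at the Sub level, which is exactly why state 2 and state Sub exit actions run before the sb transition action.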


Guards

A Guard is a condition associated with a Transition which can prevent the normal flow of events triggered by the transition. Guards are implemented as a function in the SessionClientState object of the form:

( ctx: StateInvocationParams<SessionLogic> ) => boolean

For example, say we have a transition from State A to State B, triggered by the AB event. If the guard function returns false, the Transition is prevented, and instead of an A -> Transition -> B flow of actions, the execution flow will be: A -> Transition -> A.
This fact is indicated in the StateInvocationParams object by having its optional variable guarded set to true.
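As a standalone illustration (sketch only, not the Automata API), guard semantics can be reduced to:

```typescript
type Guard = () => boolean;

// A vetoing guard keeps the session in the source state, and the result
// flags the fact with `guarded: true` (mirroring the guarded flag in
// StateInvocationParams).
function applyTransition(
    from: string,
    to: string,
    guard?: Guard
): { state: string; guarded: boolean } {
    if (guard && !guard()) {
        // A -> Transition -> A: actions still run, but state is unchanged.
        return { state: from, guarded: true };
    }
    return { state: to, guarded: false };
}
```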

Local vs External transitions

FSM interaction happens primarily by calling dispatchMessage, which dispatches a message to a Session object. Each dispatched message generates an internal message queue, where internal messages can be queued.
When a given FSM Action needs to post a message, it must use the postMessage function instead.

Posted messages will be queued in the current execution unit, before any pending dispatched messages. This way, an auto-transition can happen safely.

An Action can as well dispatchMessage at any time, but the difference is clear: dispatched messages will be queued after all previously dispatched messages w/o any guarantee of order of execution.
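The ordering rule can be sketched with two plain queues (illustrative TypeScript, not Automata's internals): posted messages always drain before pending dispatched ones:

```typescript
class MessageQueues {
    private posted: string[] = [];
    private dispatched: string[] = [];

    dispatch(m: string) { this.dispatched.push(m); }
    post(m: string) { this.posted.push(m); }

    // Consume everything, returning the order of execution: any posted
    // message runs before any still-pending dispatched message.
    drain(): string[] {
        const order: string[] = [];
        while (this.posted.length > 0 || this.dispatched.length > 0) {
            const m = this.posted.length > 0
                ? this.posted.shift()!
                : this.dispatched.shift()!;
            order.push(m);
        }
        return order;
    }
}
```

This is why an Action that posts a follow-up message can rely on it running before any externally dispatched message still waiting in line.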

It is important to note that all messages, dispatched or posted, run in the context of setImmediate calls. This has important implications, like the fact that dispatchMessage is fully asynchronous. This function accepts a second parameter to get notified when the message has been fully consumed. This is especially important when a given FSM Action posts new messages to be consumed in the same unit of execution.

The full dispatchMessage signature is:

new SessionConsumeMessagePromise<SessionClientState>().then(
    (session: Session<SessionLogic>, message?: Message) => {
        // event successfully fully consumed
        // (all post messages included)
    },
    (session: Session<SessionLogic>, error?: Error) => {
        // event fully consumed (all post messages included).
        // there was an error in execution.
    }
);

Also note that all events sent to Automata execute in a try/catch block. The caught error will be notified to the error function of the optional consumption execution promise.

Session serialisation

By default, a Session serializes its FSM definition, and its internal state.
There’s no way for Automata to know what parts of the ClientState are transient or how to serialise them, so it delegates this step to the ClientState developer.

If the ClientState has a method serialize, it will be invoked and its result saved next to the Session’s serialization information.

The serialization process would then just be:

const serialized_session = session.serialize()

Analogously, deserialization of a Session object needs a ClientState builder function. The call to have a fully fresh session built from a serialised object would be:

const session2 = Session.Deserialize(
    serialized_session,
    (data: any): SessionLogic => {
        // data is the serialized client state.
        return new SessionClientState(data);
    }
);

The session serializes the FSM needed to build it, w/o polluting the FSM Registry. The idea is to be self-contained, so a Session knows how to restore its internal state.
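The delegation idea can be sketched standalone (illustrative TypeScript; MiniSession and ClientStateLike are made-up names, not the Automata API): the session serializes its own state, appends whatever the client state's optional serialize() returns, and deserialization takes a builder function:

```typescript
interface ClientStateLike { serialize?(): any; }

class MiniSession<T extends ClientStateLike> {
    constructor(public current: string, public client: T) {}

    // Session-owned state plus the (optional) client state snapshot.
    serialize(): { current: string; client?: any } {
        return {
            current: this.current,
            client: this.client.serialize ? this.client.serialize() : undefined
        };
    }

    // The builder rebuilds the client state; the session restores itself.
    static Deserialize<T extends ClientStateLike>(
        data: { current: string; client?: any },
        builder: (clientData: any) => T
    ): MiniSession<T> {
        return new MiniSession(data.current, builder(data.client));
    }
}
```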

Session observers

While Session objects' actions are choreographed by the Automata framework, it is interesting to be notified about certain important Session events. The full session-creation function call is:

const session = Registry.SessionFor(
    "Test4",                // a registered FSM
    new ClientState(),      // a client State object
    session_observer        // an optional session observer.
);

SessionObserver is of the form:

export interface SessionObserver<T> {

    // session finished. can't accept any other messages.
    finished(session: Session<T>);

    // session has fully processed the init event.
    // see Local vs External transitions.
    ready(session: Session<T>, 
          message: Message|Error, 
          isError: boolean);

    // the session changed State. 
    // Auto-transitions and guarded transitions 
    // also notify this method.  
    stateChanged(session: Session<T>,
                 from: string, to: string,
                 message?: Message);
}

FSM Registry

The Registry keeps FSM definitions and allows creating multiple sessions for the same FSM. Serialised sessions don't add new FSM entries to the Registry.

To add new FSM definitions, you just call

Registry.Parse( FSMJson[] );

FSMJson definition is as follows:

export interface TransitionJson {
    from: string;
    to: string;
    event: string;
}

export interface FSMJson {
    name: string;
    state: string[];
    initial: string;
    transition: TransitionJson[];
}

Once registered, obtaining a session is quite simple:

// T is any object used as SessionClientState. 
Registry.SessionFor<T>(s: string, 
                       state: T, 
                       observer?: SessionObserver<T>)


const session = Registry.SessionFor(
    "Test4",                // a registered FSM
    new ClientState()       // a client State object
);

session.dispatch({
    event: "ab",
    payload: {}         // extra payload received in the Action's
                        // StateInvocationParams message object.
});


Going directly to a complex example: I include the FSM definition of one of my multiplayer games, a full clone of Scrabble/Words With Friends type of games.

Squeezing v8 startup time

In my day-to-day job, v8 is present most of the time. One philosophical foundation of our product is startup time: all time taken from startup to the first frame drawn must be lowered as much as possible.

This startup time is composed of many different steps, from GL context initialization to script parsing. Some of these steps simply can't be avoided or their time can't be lowered because they are system dependent (like creating an OpenGL context), but others can be addressed, and every millisecond on a mobile device counts. While medium/high-end devices like a Nexus 5x have pretty decent execution times from startup to frame shown (<250ms), lower-end ones like a Motorola XT are not so good.

A few approaches to lower these times are:

Use snapshot

The same code executed on v8 5.6 with or without a snapshot shows a dramatic improvement in v8 initialisation times.
While on the Nexus 5x the impact is relatively low (32 vs 59 milliseconds with and without a snapshot), on the lower-end Motorola device the difference is 85 vs 560 milliseconds. v8's initialisation time alone is half of what a user needs to perceive a bad user experience.
v8 snapshots are enabled by default, and will require you to supply user-side stub code for loading general snapshots, but totally worth it.
These are numbers for just bootstrapping v8 w/o user code.
I personally include the snapshots as a natively linked library. They could also be loaded from disk, but i/o can impose its toll.

Upgrade v8 version

Compiling a new v8 version has almost halved scripts' parse and execution times. From v8 4.9 to v8 5.6, the parse/execution time of a big script (5000 js loc/400Kb) is mostly halved at no cost: from 160 to 75 ms on the Motorola, and from 60 to 35 ms on the Nexus 5x.
Bumping one simple v8 version number does the magic. It also behaves much better memory-wise on devices with 512Mb of memory, etc.
Unfortunately, bumping the v8 version can't keep lowering parse/execution times forever though.

Code load, parse and execution

Parse/execution time can also be affected by how we load code into v8. Chrome has a defer modifier for a script declaration. How this translates into the v8 world is simple: v8 expects the javascript code to be streamed, and it will thus be incrementally parsed and compiled on the fly.

My tests show that for big browserified projects, I get up to a 10-20% speed up in parse/execution times for the Motorola, and for small files in high-end devices, it can be negligible (simply because the higher end device is already fast enough).

At the native code level, this translates into adding more infrastructure. v8 enters a streaming mode, in which the developer must feed script contents to v8 on demand. The setup is not straightforward, and a background thread is needed to govern streaming javascript content. Here are the v8 headers for the needed setup:

/**
 * Source code which can be streamed into V8 in pieces.
 * It will be parsed while streaming.
 * It can be compiled after the streaming is complete.
 * StreamedSource must be kept alive while the streaming
 * task is ran (see ScriptStreamingTask below).
 */
class V8_EXPORT StreamedSource {
  ...
};

/**
 * A streaming task which the embedder must run on a
 * background thread to stream scripts into V8. Returned by
 * ScriptCompiler::StartStreamingScript.
 */
class ScriptStreamingTask {
 public:
  virtual ~ScriptStreamingTask() {}
  virtual void Run() = 0;
};

/**
 * Returns a task which streams script data into V8, or NULL
 * if the script cannot be streamed. The user is responsible
 * for running the task on a background thread and deleting it.
 * When ran, the task starts parsing the script, and it will
 * request data from the StreamedSource as needed. When
 * ScriptStreamingTask::Run exits, all data has been streamed
 * and the script can be compiled (see Compile below).
 * This API allows to start the streaming with as little data
 * as possible, and the remaining data (for example, the
 * ScriptOrigin) is passed to Compile.
 */
static ScriptStreamingTask* StartStreamingScript(
    Isolate* isolate, StreamedSource* source,
    CompileOptions options = kNoCompileOptions);


I assume you have already minimised/obfuscated your javascript code, and that your images/textures/assets have been compressed, PowerVR-tooled, etc.

Android Java/Json converter

With runtime type reflection

I've been working a long time with multiplayer web-based/Android/iOS games, and one common operation on the server side or in native clients is Json conversion to Java and vice versa.

For such repetitive tasks I created a runtime reflection-based system to aid me in the process. Serialization is straightforward: just recursively visit every object's fields and convert to Json with basic type inference. One important thing to detect, though, are cyclic dependencies, and the serializer definitely takes care of that. Inner classes' synthetic outer-reference fields are skipped, and fields annotated as @Transient are also not serialized.

The deserializer is somewhat trickier. It basically maps and converts untyped Json data into Java types, which is not always possible, unless:

  • Java deserialization target objects have fully qualified field types. For example, it is not valid to set a type as List<TestA> because the deserialization type inference can’t instantiate such a type.
  • Java target object field types are of primitive types, List<?> subclasses, or other Java objects whose fields follow the same rules.
  • null is a first class citizen.
  • You can refer to inner classes to map objects.

For example, this is valid code for the transformer's deserializer:

JSONArray json = new JSONArray("[0,1,2,3]");
int[] a = JSONDeserializer.Deserialize( int[].class , json);

Or this other example, where a JSONArray is mapped to an ArrayList<TestA> of objects:

public class TestE {
    ArrayList<TestA> arr;
}

JSONObject json = new JSONObject(
        "{" +
            "arr:[" +
                "{\"a\":12345,\"str\":\"1qaz\"}," +
                "{\"a\":2345,\"str\":\"2wsx\"}," +
                "{\"a\":34567,\"str\":\"3edc\"}," +
                "null" +
            "]" +
        "}" );
TestE a = JSONDeserializer.Deserialize( TestE.class , json);
JSONObject json2 = new JSONObject( JSONSerializer.Serialize(a) );
assert(json.equals(json2));  // true

You can find transformer implementation here:

Node.js module globals

Everyone who has required a Node.js module has realised that module globals stay local to the module and don't pollute the global namespace. From a Node.js standpoint, this is fairly easy: since in Javascript a function creates a scope, a module's source code is wrapped in a javascript function like:

// node/lib/internal/bootstrap_node.js:506
(function (exports, require, module, __filename, __dirname) {
    // module code here
});

This function uses the same javascript context as all the other modules available in node, internal or not. Among its parameters, module and exports allow exporting values from the module. All other module contents are kept in the anonymous function's closure. (This function signature might make clearer why some modules use exports and some others module.exports.) Worth noting: module is not global, but local to each module.

The module also receives the absolute path to the filename and directory it was loaded from. Most of the burden of module loading comes from finding where on the filesystem the module actually has to be loaded from (the require.resolve() function) and building the lookup paths (module.paths).
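Node's actual loader is more involved, but the scoping trick itself can be reproduced in a few lines with the Function constructor (illustrative only; the module body here is a made-up string):

```typescript
// Compile "module source" into a function whose parameters mirror node's
// wrapper, then call it with a fresh module/exports pair.
const source = "const secret = 42; exports.visible = secret + 1;";

const wrapper = Function(
    "exports", "require", "module", "__filename", "__dirname",
    source
);

const mod: { exports: any } = { exports: {} };
wrapper(mod.exports, undefined, mod, "/tmp/fake.js", "/tmp");
// `secret` stays trapped in the wrapper's scope; only `exports` escapes.
```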

Simple solutions to achieve great results.

Change object default toString value in Javascript

For any javascript object, e.g.:

const obj = {
  x : 3,
  y : true,
  z : function() {}
};

A call to obj.toString() will by default yield [object Object]. Some other objects, like the browser Window object, will yield [object Window] if printed on dev tools.

We can force a value different than [object Object] by defining the well-known symbol Symbol.toStringTag on the object:

Object.defineProperty(obj, Symbol.toStringTag, {
    configurable: true,   // it might be changed/redefined.
    value: 'MyCoolObj'    // put here your object description.
});

Now, obj will be identified as:

obj.toString();  // "[object MyCoolObj]"

Tagging special objects has never been easier than this.


The native side of things would look like this:

// for an existing object or template
obj->DefineOwnProperty(
    context,
    v8::Symbol::GetToStringTag(isolate),
    v8::String::NewFromUtf8(isolate, "YOUR_STRING_CLASS_HERE"),
    static_cast<v8::PropertyAttribute>(v8::ReadOnly | v8::DontEnum));

// you also could just (e.g. in interface_template):
interface_template->Set(
    v8::Symbol::GetToStringTag(isolate),
    v8::String::NewFromUtf8(isolate, "YOUR_STRING_CLASS_HERE"),
    static_cast<v8::PropertyAttribute>(v8::ReadOnly | v8::DontEnum));

About me

This is a blog about random brain dumps on programming. Mostly things that have been useful to me over the years.

In my career I have been a cofounder of a couple of failed startups, a manager, but above all a software engineer. As such, I face daily challenges that can surely be blueprinted, and this is the whole purpose of this blog: to keep a repository of solutions so that I don't hit the same wall twice.

Expect a lot of v8/Javascript Core embedding, Typescript, NodeJS, 2D graphics.

Feel free to ping back here or @hyperandroid.