Generic Javascript to Java bridge

When embedding v8, one of the pain points is calling Java/Kotlin code from Javascript. It is not just a matter of handling a FunctionCallbackInfo<Value>, but also of dealing with JNI. While there are really impressive exercises in automating JNI calls, these are only suitable when you know your JNI needs upfront, e.g. method signatures, calling objects/classes, etc. More concretely, when you can compile your own v8 code. In my case, v8 is primarily embedded in Android, and I share around an AAR file with all needed v8 dependencies. How each project depending on this AAR exposes its own Javascript bindings is as simple as annotating Java code with @Bridge.

This post depicts how I built a dynamic bridge between Java and v8, and how methods annotated with @Bridge are automatically exposed in Javascript.

For example:

// Java
class Test {
  @Bridge(returnsJSON = true)
  String method2() {
    return "{\"result\":0}";
  }

  @Bridge
  String method1(int a, int[] i32, double[] f64) {
    return "";
  }
}

// Javascript
> Test.method2();
> { result: 0 }
> Test.method1(1, [1,2,3], new Float64Array([.1, .2, .3]));
> ""

// Javascript call with wrong Java parameter signature:
> Test.method2(1,2);
> null
> Test.method1(32);
> null

As nice as it might sound, it comes with a myriad of limitations:

  • Method disambiguation. Java has method overloading, which does not exist in Javascript.
  • Javascript to Java (and vice versa) type conversion. Only number, string, boolean, null, typed arrays, and arrays or objects of these types can be safely converted between environments. Arrays and objects are defined recursively.
  • Java method signatures. In Javascript, you can call a function with an arbitrary number of parameters and types. Exposed Java functions are no exception. Only Javascript calls that match a Java method signature (based on the previous point's type conversion) will be executed. For now, this is a reflection-based method invocation. Slow, but convenient.
  • Java method return types. While a Java method can virtually return anything, I constrained return types to String. Optionally, this string can be JSON parsed before setting the Javascript call's return value. If any error has been caught, null will be returned instead.
  • Every Javascript call will pass through a single JNI entry point.
  • Asynchronous calls. These cannot be directly modelled with this approach, but they work under other @Bridge annotation options.
  • It tends to be slow: reflection in Java plus the JNI bridge. Read my APT articles on how to remove reflection calls.
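To make the signature-matching limitation concrete, here is a small, hypothetical sketch; BridgeRegistry, its type tags and method names are mine, not the actual implementation. A call is executed only when the Javascript argument types match a registered Java signature; otherwise null is returned:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Illustrative dispatch table: each bridged Java method is registered under a
// type-tag signature (e.g. "method1:i32,i32[],f64[]"). A Javascript call is
// dispatched only when its argument types produce a matching key.
struct BridgeRegistry {
    using Handler = std::function<std::string(const std::vector<std::string>&)>;
    std::map<std::string, Handler> handlers;

    void add(const std::string& sig, Handler h) { handlers[sig] = std::move(h); }

    // returns "null" when no Java method matches the JS argument types,
    // mirroring the wrong-signature examples above.
    std::string call(const std::string& name,
                     const std::vector<std::string>& argTypes,
                     const std::vector<std::string>& args) {
        std::string sig = name + ":";
        for (size_t i = 0; i < argTypes.size(); ++i)
            sig += (i ? "," : "") + argTypes[i];
        auto it = handlers.find(sig);
        return it == handlers.end() ? std::string("null") : it->second(args);
    }
};
```

With this scheme, Test.method2() dispatches, while Test.method2(1,2) falls through to null, as in the Javascript examples above.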

Are all these limitations worth it? It totally is for non-critical code paths. In my case I use it for things like calling the speech synthesiser, opening share dialogs, creating gl textures… But definitely not for calling into every OpenGL function per frame, for example.

Implementation details will come in an upcoming article.

Native threads and JNI

It is well known that a thread not created by Java itself (like one spawned with pthread_create on Android, or a native activity's thread) can't find a JVM class by calling the Java env's FindClass. These calls will return class not found (nullptr), even for core classes like java.lang.String.

It turns out FindClass relies on a ClassLoader to find a specific jclass object. The solution is simple, but subtle.

Our native thread is not attached to any VM/Java env, and thus env->FindClass calls will fail. We need another way to find classes. And what class has a findClass method? A ClassLoader exposes a loadClass method, which indirectly calls findClass among other methods. Simply enough, we need to obtain a ClassLoader reference, the most obvious Java code being: object.getClass().getClassLoader().

In my case, I get such class loader from the call to v8 initialisation. This method is public native void InitializeV8();. In native:

    JNIEXPORT void JNICALL
    Java_com_spellington_task_TaskRunnerInstance_InitializeV8(JNIEnv *env, jobject obj) {

        // obj is the implicit `this` of the java call to the native method.
        // rc is obj.getClass()
        auto rc = env->GetObjectClass(obj);

        // java.lang.Class, needed to look up getClassLoader()
        jclass clazz = env->GetObjectClass(rc);

        // find the getClassLoader method
        jmethodID getClassLoaderMethod = env->GetMethodID(clazz,
                "getClassLoader", "()Ljava/lang/ClassLoader;");

        // obtain the class' ClassLoader reference.
        jobject objClassLoader = env->CallObjectMethod(rc, getClassLoaderMethod);

        // protect this ref from GC ! This is mandatory.
        //   objClassLoader must be stored somewhere else for later usage
        objClassLoader = env->NewGlobalRef(objClassLoader);

        // ClassLoader class
        jclass objClassLoaderClass = env->FindClass("java/lang/ClassLoader");
        // loadClass reference.
        //   Must be stored somewhere else for later usage.
        jmethodID loadClassMethod = env->GetMethodID(objClassLoaderClass,
                "loadClass", "(Ljava/lang/String;)Ljava/lang/Class;");
    }

This code just obtained a reference to a valid ClassLoader and a methodID for its loadClass method. Now we have an alternative to just calling FindClass:

const char* const className = "com/spellington/HCSurfaceView";

auto clazz = env->FindClass(className);

// if FindClass failed, try the ClassLoader alternative.
// This will be true in two different scenarios:
//    + className effectively does not exist.
//    + a native thread calls into JNI
if (clazz == nullptr) {
    // FindClass alternative based on our ClassLoader.
    // note: ClassLoader.loadClass expects a binary name with dots,
    // e.g. "com.spellington.HCSurfaceView".
    clazz = static_cast<jclass>(
        env->CallObjectMethod(objClassLoader, loadClassMethod,
                env->NewStringUTF("com.spellington.HCSurfaceView")));
}

// if clazz is nullptr, there are another two different scenarios:
//    + most likely, className is not a valid fully qualified class name.
//    + a native thread is trying to find a class which does not exist in this
//        ClassLoader, but might exist in another one. This is likely if and
//        only if your app has custom ClassLoader instances.

When mixing Java and native threads, or just calling JNI in a multithreaded environment, you might want to pay attention to the Java VM's AttachCurrentThread method… But that's another story.

Enable JavascriptCore debugging

Enabling Safari dev tools for JavascriptCore remote debugging of your contexts is much easier on iOS/OSX than on Android. See my article on how to set up remote debugging on v8/Android.

There are two requirements that must be fulfilled first though:

  • Sign your app. This happens automatically on iOS, but is optional for an OSX app.
  • Enable developer menu on Safari. Just open Safari, Preferences, and enable this highlighted-in-red check:

This is roughly 95% of the work. The other 5% is shockingly simple:

let scriptContents = try String(
    contentsOfFile: PATH_TO_YOUR_FILE,
    encoding: String.Encoding.utf8)

// when evaluating scripts, just pass another parameter.
// context is your JSContext instance:
context.evaluateScript(
    scriptContents,
    withSourceURL: URL(string: URLString))  // withSourceURL enables remote debugging

PATH_TO_YOUR_FILE is a valid path to your Bundle files. Feel free to obtain your script contents from any source.

URLString is the key component which enables JavascriptCore remote debugging. If not set, JavascriptCore won't be able to tell, for example, where to download source code or map files from, or, most importantly, how to organise your script files in Safari's javascript console source explorer.

Setting a convenient URLString is paramount. E.g. you can set it to an external url where your map files exist. Don’t forget to use a URL prefix like file:// or http://.

You can also enable checks in Safari's Developer menu to auto-pause JavascriptCore's execution on run, actually for each evaluated script.

One final note. This debugging capability will also be enabled in production, after submitting your files to the store. Since the withSourceURL parameter is optional, you might want to set an #if DEBUG latch to decide whether to use it. Important to know: only iOS accounts present in the app's provisioning profile will get JavascriptCore's debugging capabilities enabled. For other users it will be disabled.

Javascript native wrappers in V8 — Part I

While embedding v8, exposing native objects in Javascript is mostly unavoidable. As easy as it might sound, the process involves a very specific set of steps as well as certain design decisions. From the Javascript standpoint, there will be no difference between a pure Javascript object and a wrapped one. For example, you must be able to extend a native object prototype with a plain Javascript object, or vice versa.

The link between Javascript and the native world is bidirectional. A native wrappable object should refer to a single Javascript object (not necessarily, but it will save a lot of headaches at a later stage with garbage collection). This is done by keeping a reciprocal reference between a v8::Persistent<v8::Object> and a native object, bridging between the two.

It is important to note that now, two inter-related objects exist, a Javascript wrapper and a native wrappable. Both have different lifecycles, and we must bind them in the right way.
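Conceptually, the two lifecycles are tied through a pair of mutual pointers. A plain C++ sketch of that reciprocal link, with stand-in types instead of v8 handles (all names here are mine, for illustration only), could look like this:

```cpp
#include <cassert>

struct Wrappable;

// stands in for the v8 side: a persistent handle plus internal field 0
struct JsWrapper {
    Wrappable* native = nullptr;   // JS -> native pointer
};

// stands in for the native side base class
struct Wrappable {
    JsWrapper* wrapper = nullptr;  // native -> JS reference
};

// binding both directions, as a constructor callback would do
inline void bind(JsWrapper& js, Wrappable& native) {
    js.native = &native;
    native.wrapper = &js;
}
```

In the real code, JsWrapper's pointer lives in a v8 internal field and Wrappable's reference is a v8::Persistent<v8::Object>, but the shape of the association is exactly this.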

Design decisions

Native wrappable objects will extend a base class: Wrappable. Its purpose is to keep track of the Javascript object wrapping a native object (hence its name), and it also does the heavy lifting to associate a native object with a Javascript object, like invoking the constructor, setting the Javascript object's native pointers, etc.

Wrappable will rely on a WrapperTypeInfo to configure itself. Each potentially wrappable type exposed in Javascript will define one such struct. For example, for an actual Event object I define:

const WrapperTypeInfo V8Event::wrapperTypeInfo = {
        V8Event::InterfaceTemplate,        // configuration function
        "Event",                           // js object context
        nullptr                            // inherit from
};

For code simplicity, I will define two cpp files for each object I'd like wrapped and exposed in Javascript. For the Event object sample, a file Event.cpp will keep the delegated methods invoked from Javascript. Don't forget the Event class extends Wrappable, so it inherits the ability to generate its Javascript wrapper on demand. The other file, V8Event.cpp, contains all the Javascript-related stuff, like the constructor function, the list of accessors and functions, etc.

Another design decision I made is about object extension. Any Javascript object extending another object will be reflected in its native wrappable extending the other wrappable. Basically, an object extending another in Javascript necessarily means one c++ class extending another.
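As an illustration of this mirrored-hierarchy rule: if a hypothetical UIEvent extended Event in Javascript (UIEvent is my example, the article only defines Event), the native side would mirror it with plain C++ inheritance:

```cpp
#include <cassert>
#include <string>

// Illustrative hierarchy only: real wrappables carry v8 bookkeeping.
struct Wrappable {
    virtual ~Wrappable() = default;
    virtual std::string interfaceName() const { return "Wrappable"; }
};

struct Event : public Wrappable {
    std::string interfaceName() const override { return "Event"; }
};

// pairs with the Javascript side calling interface_template->Inherit(...)
struct UIEvent : public Event {
    std::string interfaceName() const override { return "UIEvent"; }
};
```

The prototype chain in Javascript and the virtual dispatch in C++ then stay in lockstep, which is what makes the single-pointer internal field safe to downcast.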

Configuring wrapper objects

Depending on whether we expect this to be part of the global context object, we will create either a v8::FunctionTemplate or a regular v8::Function object.

In this article, I will be creating an Event object in Javascript, which is actually delegating all its functionality to a Native c++ wrappable instance.

FunctionTemplate structure

As we know, a Javascript function is a first-class object. It can be called, it can be called as a constructor (instanced), which will create a prototype chain if needed, and the function object itself can hold functions and variables. All this translates directly into native code, where a v8::FunctionTemplate object (which we will name interface_template from now on) exposes two methods returning a Local:

// prototype object template
Local<ObjectTemplate> prototype_t = interface_t->PrototypeTemplate();
// instance object template
Local<ObjectTemplate> instance_t = interface_t->InstanceTemplate();

So, for each wrappable object, we will manage 3 different places to add native bindings code to:

  • Prototype template will be used, of course, to define accessors and functions on the prototype.
  • Instance template will be used to add accessors or functions to an instance resulting from calling the constructor function.
  • Interface template will be used to add accessors or functions to the constructor function itself. These won’t be accessible from any instance or object prototype though.

Exposing a Constructor function in Javascript

The constructor function is the entry point to instantiate Javascript objects and bind them with native wrappable objects.

The first thing would be to make our instantiation function available in Javascript. This is done by exposing our interface_template in the global context object (or any other object). Something like:

// global_template is the global object's v8::ObjectTemplate
global_template->Set(
        v8::String::NewFromUtf8(isolate_, "Event"),
        interface_template);

When in Javascript the Event object is created by calling new Event(), the wrapper will invoke our supplied construction callback. This constructor callback is defined as:

// c++ function invoked when new Event() is called in js.
// see "Javascript-native object relationship"
interface_template->SetCallHandler(constructorCallback);
// constructor length, e.g. number of parameters
interface_template->SetLength(0);

This CallHandler function is a special native constructor delegate. It is responsible for associating a Javascript object with its native wrappable when invoked from Javascript as new Event(), but it must also handle the situation where an existing native object just needs to be wrapped and made available in Javascript. We'll see how to do this later in the article.

The CallHandler has the signature of a regular native function callback:

void constructorCallback(const FunctionCallbackInfo<Value> &ci);

Adding accessors

Accessors will act as a property's getter and setter function callbacks. Normally I will add accessors to the prototype or instance templates, or both, but they can be added to the interface template as well. Once in Javascript, there will be no difference from a regular object variable, except that behind the scenes a native object is accessed, and a native value is wrapped as a Javascript type.

For accessors in prototypes or instances, we need to add a regular v8::FunctionTemplate representing the getter or setter to the corresponding v8::ObjectTemplate.

Adding accessors to the interface template is a bit different, since its type is v8::FunctionTemplate, not v8::ObjectTemplate.

Doing this is mostly trivial:

void native_getter(const FunctionCallbackInfo<Value> &info) {
    // ...
}

void native_setter(const FunctionCallbackInfo<Value> &info) {
    // ...
}

// getters don't need parameters, we pass 0.
Local<FunctionTemplate> getter = v8::FunctionTemplate::New(
        isolate, native_getter, Local<Value>(), Local<Signature>(), 0);

// setters need a parameter, so we pass 1.
Local<FunctionTemplate> setter = v8::FunctionTemplate::New(
        isolate, native_setter, Local<Value>(), Local<Signature>(), 1);

// these getter/setter function callbacks don't need a prototype.
getter->RemovePrototype();
setter->RemovePrototype();

// create an accessor name
Local<String> name = String::NewFromUtf8( isolate, "prop" );

// binding:
prototype_t->SetAccessorProperty(
        name,
        getter, setter,
        attribute         // see v8::PropertyAttribute
);

Now, whenever ev.prop is accessed from Javascript, the native_getter function is invoked. Our wrappable code starts to make sense now.

We must repeat the same process for each accessor we'd like to have in our Javascript wrapper objects.

Adding functions

Another piece of functionality wrapper objects need is functions. As with accessors, a Javascript function callback must be defined. Simply enough:

void callback(const FunctionCallbackInfo<Value> &info) {
    // ...
}

v8::Local<v8::FunctionTemplate> function_template =
        v8::FunctionTemplate::New(
                isolate, callback, Local<Value>(), Local<Signature>(),
                numberOfParameters );

// again, prototype not needed for this callback function
function_template->RemovePrototype();

// "fnName" is illustrative
prototype_t->Set(
        String::NewFromUtf8( isolate, "fnName" ),
        function_template,
        attribute           // see v8::PropertyAttribute
);

Again, repeat this process for each function we want to be available in our Javascript wrapper.

Javascript-native object relationship — Javascript Instantiation

When code like this is executed in Javascript

 new Event();

the registered constructor callback is invoked. It will also add all defined accessor and function bindings, create the prototype chain, etc. As a native function callback, its signature is:

void constructorCallback(const FunctionCallbackInfo<Value> &ci)

It has three main responsibilities:

  1. Abort object creation if this is not a valid constructor. For example, TouchList is not instantiable by constructor, so it is safe to throw an exception here. A call of the following form will do the trick.

// the message string is illustrative
isolate_->ThrowException( v8::Exception::Error(
        v8::String::NewFromUtf8( isolate_, "Illegal constructor" ) ) );

// don't forget to return from constructorCallback after throwing...
return;

2. Generate native wrappable instance and associate it with the Javascript object:

// create a wrappable
Event* event = new Event();

// we associate a native Event object, with constructorCallback's
// holder. Holder() points to the object being constructed in 
// javascript.
v8::Local<v8::Object> wrapper = ci.Holder();

// Event instance holds a v8::Persistent<v8::Object> reference
event->wrapper_.Reset(isolate, wrapper);

// Manage gc.
// e.g:
//    event->wrapper_.SetWrapperClassId( ...
//    event->wrapper_.SetWeak( ...
// or
//    event->wrapper_.ClearWeak();

3. Bridge the Javascript object with the native wrappable

// note: instance_t->SetInternalFieldCount(1) must have been called
// when configuring the instance template.
wrapper->SetAlignedPointerInInternalField(0, event);

After this code, the Javascript object has a pointer to the wrappable c++ object, and the wrappable c++ object holds a v8::Persistent handle to the same Javascript object.

Accessor and Function callbacks

For accessor getters, accessor setters, and function callbacks alike, we must specify a function of type v8::FunctionCallback, that is:

void fnName(const FunctionCallbackInfo<Value> &info);

How the Javascript object accesses its native wrappable object is as follows:

void fnName(const FunctionCallbackInfo<Value> &info) {
    Local<Object> holder = info.Holder();
    Event* event = reinterpret_cast<Event*>(
        holder->GetAlignedPointerFromInternalField(0));
    info.GetReturnValue().Set( event->name );
}

The key here is where to obtain the native pointer to the wrappable object. And that will always be the Holder object on every FunctionCallback.

Javascript-native object relationship — Native Instantiation

Sometimes we want to expose a native wrappable object in Javascript without it being created from Javascript. For example, data generated in native code needs to make its way through to Javascript.

Event* ev = new Event("load"); 
// set wrappable properties
ev->target = this;
ev->currentTarget = this;
// create a Javascript object to reference to this ev
// native instance.

For this to happen, we need to manually create a Javascript object of the needed type. We have the interface template, so it seems like a trivial operation: just create a new instance from the interface template's constructor function.

We would be mostly done, except for the fact that this code will invoke the constructorCallback previously defined, which would create a new Event instance. Our constructorCallback must therefore be aware that an existing native object is being wrapped instead. Other than that, the code will be the same for the constructorCallback.

// Signal constructorCallback to wrap an existing object,
// instead of creating a new one. RAII on isolate's private data.
Config::ConstructorMode p = Config::Status::CurrentConstructorMode;
Config::SetCurrentConstructorMode(Config::kWrapExistingObject);
v8::Local<v8::Object> wrapper = interface_template->
        GetFunction()->NewInstance();

// do wrapper association just as in the javascript instantiation
// example.

// restore constructor mode to the previous one.
Config::SetCurrentConstructorMode(p);
Needless to say, I also trivially check for an already existing Javascript wrapper for the native object.

Prototype Inheritance

Inheriting a prototype for a wrappable object can only happen from a native prototype. Don’t worry though, you will be able to extend a wrapper object from Javascript.

The inheritance process boils down to just one native call, and must happen at constructor function definition time. This code will extend the prototype chain of our wrapper object:

// extend the prototype chain. interface_template is the previously
// defined function template; the argument is the parent type's
// interface template (name illustrative).
interface_template->Inherit( parent_interface_template );

Object naming

By default, wrappable objects will identify themselves in the chrome devtools console as the infamous {}, that is, no name. To properly have our wrappables named in dev-tools, we need to do a couple of things:

  1. set the constructor function class name.
interface_template->SetClassName( v8::Local<v8::String> );

This will properly name our wrapper's exposed constructor function. Unfortunately, this won't be sufficient, since our wrapper's prototype object still won't show the expected naming. To fix this, a trickier approach must be taken:

2. name our prototype object, by setting Symbol.toStringTag on it:

prototype_t->Set(
        v8::Symbol::GetToStringTag(isolate),
        v8::Local<v8::String>,  // desired string representation
        v8::PropertyAttribute); // e.g.: v8::ReadOnly | v8::DontEnum

Putting it all together

I have created this repository where all things covered in this article have been placed. It is a compilable and runnable version of this post.


There is still some other stuff that native wrappable objects can do: indexed properties, which accept indexed access like an array does, e.g. for the DOM TouchList object; interceptors; making an object callable; receivers; signatures; inheritance… v8's bestiary is fairly interesting, and each of its creatures deserves one or more posts to describe them.

At first glance it is not easy to create a Javascript wrapper. Too many steps for something conceptually simple, specially if we compare it with defining a plain Javascript object. That's Javascript's magic: it abstracts away all the dirty details from the developer, like GC, instantiation, wiring, etc. This article might also be useful to realise how complex a browser can be. We just scratched the surface and showed how to wire the simplest native object possible.

I'd like to make an explicit disclaimer here. All credit for this article must go to the V8 and Chromium project developers. Being able to scavenge through the whole browser source code has been an invaluable resource for figuring things out, and for learning very solid foundations of v8 development. On my side, I just made the effort of collecting the low-hanging fruit.

V8 wrapped objects lifecycle

Depending on the type of handle that holds them, objects in V8 can primarily be Local or Persistent. There's also a third handle type, Eternal, which lives for the lifetime of the Isolate, thus never being garbage collected.

Local represents a short-lived object; from the v8 header file itself: light-weight and transient and typically used in local operations. As soon as the HandleScope managing this Local handle is destroyed, the wrapped object becomes invalid and is, eventually, garbage collected.

Persistent handles can be used to store objects across several execution units. These objects will eventually be garbage collected.

While embedding v8, I almost always need to get a native object exposed in Javascript, and this is done by pairing a Persistent handle with a native object. These native objects are created and destroyed as my Javascript code flows, and the native counterpart needs to be destroyed and freed accordingly. For this purpose, I set a weak handle callback like:

class JavascriptWrapper {
    v8::Persistent<v8::Object> wrapper_;

public:
    void Wrap( v8::Isolate* isolate, v8::Local<v8::Object> object ) {
        wrapper_.Reset(isolate, object);
        // set weak handler
        wrapper_.SetWeak( this,
                weakCallbackForObjectHolder,
                v8::WeakCallbackType::kParameter );
    }

    // the weak handler function is as follows:
    static void weakCallbackForObjectHolder(
            const v8::WeakCallbackInfo<JavascriptWrapper>& data) {
        delete data.GetParameter();
    }
};
This function will be invoked as soon as the javascript object is garbage collected, allowing me to reclaim all native side resources this object held.

Sometimes, I need to keep an object around until certain operations finish. For example, an Image object should be around until its async download process ends and gets the opportunity to notify its callbacks, avoiding garbage collection during the process. This is accomplished by tagging the Persistent handle as not weak by calling:
wrapper_.ClearWeak();


This prevents the GC from reclaiming my object. Think of a javascript object like this:

const image = new Image();
image.addEventListener('load', (e) => {...});
image.addEventListener('error', (e) => {...});
image.src = 'http://...';

This code is likely to destroy the image object before its callbacks have been notified. (Note that image is not referenced at all; it is just defined and forgotten in Javascript.)
Since I expect either the load or the error callback to be notified, I must prevent GC from kicking in, and ClearWeak does exactly that. Later, when the callbacks have been notified, I can natively flag the Persistent handle as available for garbage collection by calling SetWeak(...) as in the example above. This ClearWeak/SetWeak combo gives me full control over my wrapped objects' lifecycle.

Private References

There are some other times when I just need to bind the lifecycle of one object to another. For example, a TouchEvent contains a TouchList object, and I want to bind their lifecycles together.

For this purpose, v8 also provides a Private property utility. As you can imagine, these properties will be inaccessible from Javascript. To create a private property, just call:

v8::Local<v8::Value> v8Value = obj->Wrap(info.GetIsolate(), ...);

// create a private property. The property name is illustrative.
v8::Local<v8::Private> priv = v8::Private::ForApi(
        isolate,
        v8::String::NewFromUtf8(isolate, "touchlist_private"));

// assign this property to the object:
wrapper->SetPrivate(context, priv, v8Value);
With this I get an interesting effect: the TouchList object is kept alive while the TouchEvent exists, and no one can modify or break this bond from Javascript.

There’s nonetheless another stage where my wrapped native objects deserve special attention, and this is at Isolate destruction time.

Isolate destruction

Garbage collection must not be relied on to reclaim any object. In fact it might not fire during the javascript program lifecycle.

Under this premise, all native wrapped objects need a chance to be freed upon Isolate destruction, specially if you expect to create another Isolate and avoid expensive memory leaks. Identifying our Persistent handles for this special treatment is done by tagging them with a call to:

wrapper_.SetWrapperClassId( int16_t_tag );

Later on, when the Isolate is being destroyed, I must do an explicit call to:

isolate_->VisitHandlesWithClassIds( &phv );

phv is an instance of a class like:

class PHV : public v8::PersistentHandleVisitor {
public:
    v8::Isolate* isolate_;

    PHV(v8::Isolate* isolate) : isolate_(isolate) {}
    virtual ~PHV() {}

    virtual void VisitPersistentHandle(
            v8::Persistent<v8::Value>* value,
            uint16_t class_id) {

        // delete wrapped objects on isolate disposal.
        if ( class_id == HC_GARBAGE_COLLECTED_CLASS_ID ) {
            v8::HandleScope hs(isolate_);
            // extract your wrapped object from the passed-in value object.
            Wrapper* w = reinterpret_cast<Wrapper*>(
                value->Get(isolate_).As<v8::Object>()
                     ->GetAlignedPointerFromInternalField(0));
            delete w;
        }
    }
};
As you can see, handling native objects is actually pretty straightforward. Another demonstration of how delightful it is to work with embedded v8.

V8 inspector from an embedder standpoint

The old V8 debugger API has recently been removed from V8's source code in favor of the more modern Inspector API.

This Inspector API is great, and allows me to debug my embedded Android V8 code using Chrome dev tools, either directly from the browser or by using a standalone version of them. Profiling, memory dumps, source maps, breakpoints: it all works like a charm (except for minor bugs here and there, mainly related to chrome versions). Unfortunately, there's not much documentation on this Inspector integration from the embedder point of view.

Inspector integration process

The first thing to note about the Inspector is that inspection is per Isolate. A single Inspector object instance will be enough to debug all your Javascript Contexts. The Isolate is thread dependent, and as such you must keep your Isolate in scope (Isolate::Scope) when necessary. That said, the elements that will make up your inspection code are very simple:


The V8InspectorClient object will be used to select which Context we are currently debugging but, more importantly, it will handle the runMessageLoopOnPause and quitMessageLoopOnPause methods. These two methods are called by V8 debugging internals when you break into js code from Dev Tools. While runMessageLoopOnPause is being called, you must synchronously consume all front-end (Dev Tools) debugging messages. If you don't, you will not get all the context information of the code you are debugging. Once V8 knows it has no more inspector messages pending, it will call quitMessageLoopOnPause automatically.
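The pump in runMessageLoopOnPause can be sketched as follows. PauseLoop and its methods are my own illustration, not the V8 API; the real implementation would block on the websocket and feed each message to session_->dispatchProtocolMessage:

```cpp
#include <cassert>
#include <deque>
#include <string>

// Minimal sketch of a synchronous pause loop: while paused on a breakpoint,
// drain front-end messages until the debugger signals it can resume.
class PauseLoop {
public:
    void enqueue(const std::string& msg) { pending_.push_back(msg); }

    // stands in for quitMessageLoopOnPause
    void quit() { running_ = false; }

    // stands in for runMessageLoopOnPause; returns how many messages
    // were dispatched while paused.
    int runUntilQuit() {
        running_ = true;
        int dispatched = 0;
        while (running_ && !pending_.empty()) {
            std::string msg = pending_.front();
            pending_.pop_front();
            // real code: session_->dispatchProtocolMessage(msg)
            ++dispatched;
            if (pending_.empty()) quit();  // no pending messages: V8 resumes
        }
        return dispatched;
    }

private:
    std::deque<std::string> pending_;
    bool running_ = false;
};
```

The essential property is that the loop is synchronous: the thread that hit the breakpoint is the one consuming protocol messages, which is why you must not defer them to another thread while paused.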

The InspectorClient could do the debugging initialisation process like this:

// create a v8 inspector client:
// InspectorClientImpl : public v8_inspector::V8InspectorClient
InspectorClientImpl* inspectorClient = new InspectorClientImpl();

// create a v8 inspector instance.
std::unique_ptr<v8_inspector::V8Inspector> inspector_ =
    v8_inspector::V8Inspector::create( isolate, inspectorClient );

// create a v8 channel.
// ChannelImpl : public v8_inspector::V8Inspector::Channel
ChannelImpl* channel_ = new ChannelImpl();

// Create a debugging session by connecting the V8Inspector
// instance to the channel. The 1 is the context group id.
std::unique_ptr<v8_inspector::V8InspectorSession> session_ =
    inspector_->connect( 1, channel_, v8_inspector::StringView() );

// make sure you register Context objects in the V8Inspector.
// ctx_name will be shown in CDT/console. Call this for each context
// your app creates. Normally just one btw.
v8_inspector::StringView ctx_name( /*ctx_name*/ );
inspector_->contextCreated(
    v8_inspector::V8ContextInfo( context, 1, ctx_name ) );

That's pretty much it. After this, you'll have a valid debugging session. How do you, as a dev, interact with each of these elements (V8InspectorClient, V8Inspector, V8Inspector::Channel, V8InspectorSession)? Well, to answer this question, first we call all this code happening in our V8-enabled app the debugging backend, which implicitly means we should have a debugging front end.


Ideally, the debugging front end would be Chrome Dev Tools. CDT opens a WebSocket to communicate with the debugging backend. You can make this happen in Chrome by opening a url like (host and port are examples):

chrome-devtools://devtools/bundled/inspector.html?ws=localhost:20000/backend
This causes chrome to open a dev-tools-only tab, w/o most DOM specific stuff. In my case, 20000 is a port forwarded from my android app to a local port (adb forward tcp:20000 tcp:20000), and /backend in the url is a mount point on the backend WebSocket listener. All front-end inspector messages will be received by the backend websocket listening code, and must be forwarded to the debug session:

// msg is a std::string with whatever the front end sent to the back end,
// normally a json object with sequence and payload.
// The inspector session requires a v8_inspector::StringView:
v8_inspector::StringView message_view(
    reinterpret_cast<const uint8_t *>(msg.c_str()), msg.length());
// let the magic happen:
session_->dispatchProtocolMessage( message_view );

The V8InspectorSession object is full of inspection love. I recommend having a look at the v8-inspector.h header file. While all interaction happens from the CDT front end, you'll recognise a lot of functionality there, like the breakProgram, pause or resume methods.


All inspector protocol handling happens automagically. You don't have to worry about front-end message id sequences, or their responses. The only missing part is forwarding inspector session message results from the backend to the front end. Responses happen in the custom v8_inspector::V8Inspector::Channel object implementation. Both methods:

void sendProtocolResponse(
    int callId,
    const v8_inspector::StringView& msg);

void sendProtocolNotification(
    const v8_inspector::StringView& msg);

will handle inspector protocol responses to commands received from the inspection front end. Just convert msg from StringView to std::string (or whatever your code requires) and send it to the front end.


This is a small diagram of how things work:


At the end of the process, you'll get a full browser-enabled remote v8 debugging session. Here's a screenshot of a sample app, where all objects but console are custom-bound native objects. In this sample screenshot, the host application OS is Android.

Also note that this inspector-over-devtools integration works on the Android emulator too. On my Mac, I have an android emulator running my app with embedded v8, connected to dev tools on chrome to debug javascript… what a time to be alive!

Android v8 embedding guide

Project configuration

This post series will be an intro to embedding v8 in Android. The first step is to have v8 compiled as static or shared libraries for arm/arm64. I invite you to see my other posts on compiling v8.

Then, configure your project's Android.mk file. Depending on your v8 compilation type, you got libraries for snapshot, no snapshot, or external snapshot. Currently, snapshots are compiled by default. These snapshots contain base objects, like, for example, Math. There's no runtime difference among them, just different initialization times. On my Nexus 5X, no snapshot takes around 400 ms to initialize an Isolate and a Context, and around 30 ms with snapshot. The external snapshot and snapshot differ in that the external snapshot must be explicitly loaded (.bin files in the compilation output), while the snapshot library is a static lib of roughly 1 MB that gets linked into the final .so binary instead of being externally loaded. Bear in mind that snapshot libs, internal or external, require you to supply some extra native code for reading the Natives (.bin) files.
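As an illustration of that extra native code, loading an external snapshot essentially means reading the .bin file into memory and handing the buffer to v8 (wrapped in a v8::StartupData and passed to v8::V8::SetSnapshotDataBlob before creating the Isolate). Here is a hedged sketch of the file-loading half, kept free of v8 headers so it stands alone; the function name and error handling are illustrative assumptions:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Sketch: read an external snapshot .bin fully into memory. The buffer
// would then be wrapped in a v8::StartupData{data, raw_size} and given to
// v8::V8::SetSnapshotDataBlob() before creating the Isolate; that call is
// omitted here so the sketch compiles without the v8 headers.
std::vector<char> readSnapshotBin(const std::string &path) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    if (!in) return {};  // missing/unreadable file: empty blob
    const std::streamsize size = in.tellg();
    in.seekg(0, std::ios::beg);
    std::vector<char> blob(static_cast<size_t>(size));
    if (size > 0 && !in.read(blob.data(), size)) return {};
    return blob;
}
```

On Android the .bin files usually ship as assets, so the read would go through AAssetManager instead of a plain path; the idea is the same.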

For simplicity, we’ll use no snapshot library.

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := libv8_base
LOCAL_SRC_FILES := $(TARGET_ARCH_ABI)/libv8_base.a
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := libv8_libbase
LOCAL_SRC_FILES := $(TARGET_ARCH_ABI)/libv8_libbase.a
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := libv8_libplatform
LOCAL_SRC_FILES := $(TARGET_ARCH_ABI)/libv8_libplatform.a
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := libv8_nosnapshot
LOCAL_SRC_FILES := $(TARGET_ARCH_ABI)/libv8_nosnapshot.a
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := libv8_libsampler
LOCAL_SRC_FILES := $(TARGET_ARCH_ABI)/libv8_libsampler.a
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := <your module name>
LOCAL_CFLAGS := -std=c++11
# point to the v8 headers
LOCAL_C_INCLUDES := $(LOCAL_PATH)/include

# libraries in order.
LOCAL_STATIC_LIBRARIES := v8_base v8_libplatform v8_libbase v8_libsampler v8_nosnapshot

LOCAL_SRC_FILES := <your c/c++ files>
LOCAL_LDLIBS := -llog -landroid
include $(BUILD_SHARED_LIBRARY)


TARGET_ARCH_ABI is a variable predefined by the NDK build system that identifies the target ABI (build flavour).
LOCAL_C_INCLUDES should point to a directory containing all the header files resulting from the v8 compilation.

The Application.mk file is trivial:

APP_STL := c++_static
APP_PLATFORM := android-14
APP_ABI := armeabi-v7a

The directory structure should place the .mk files next to an armeabi-v7a folder, which should contain the static or shared v8 library files.

Lastly, add a product flavour to your build.gradle file by adding to its android section:

productFlavors {
    create("arm7") {
        ndk {
            abiFilters = ["armeabi-v7a"]
        }
    }
}
With this, you should be able to compile an Android app with v8 embedded. The next post will cover instantiating an Isolate, creating and setting a main execution Context, and setting a global exception handler: the basic elements needed to have embedded javascript running in your Android application.

Squeezing v8 startup time 2

This is a second part to this other post.

The process of loading/streaming, parsing, compiling and running is complicated enough to handle optimally. Worse, when you run the same scripts over and over, it becomes quite inefficient, since the whole process has to be repeated every time for the same scripts.

There is something we can do about this situation, though. We can obtain the parsed javascript bytecode contents and reuse them over and over again. The default, non-streamed script execution process would be something like:

ScriptOrigin origin(String::NewFromUtf8(
    isolate, constCharPtrToString));

v8::MaybeLocal<v8::Script> maybescript = v8::Script::Compile(
    v8Context,        // Local<Context> to compile in.
    v8StringSource,   // Local<String> object with source contents.
    &origin);

if (maybescript.IsEmpty()) {
    // report exception here
} else {
    Local<Script> script = maybescript.ToLocalChecked();
    Local<Value> result = script->Run(v8Context).ToLocalChecked();
}

But, on the first script execution, we can change this run process to something like:

// sourceString is a Local<String> with the script contents
ScriptCompiler::Source source(sourceString, origin);

// create an unbound script (see v8.h)
auto unboundScript = ScriptCompiler::CompileUnboundScript(
    isolate, &source, ScriptCompiler::kEagerCompile).ToLocalChecked();

// obtain the parsed bytecode for reusability:
// CreateCodeCache returns a ScriptCompiler::CachedData*.
// Store it for reusability and profit.
auto bytecode = new BytecodeCacheEntry(
    ScriptCompiler::CreateCodeCache(unboundScript));

// if you want to run this script now, bind it to the current context:
auto script = unboundScript->BindToCurrentContext();

// and just
auto result = script->Run(context).ToLocalChecked();

// check for result, etc.

Successive executions of this script can then benefit from the cached bytecode:

// build a ScriptCompiler::CachedData from the stored uint8_t* contents
// and size (the BytecodeCacheEntry above wraps this v8 type)
auto cache_entry = new ScriptCompiler::CachedData(contents, size);

// Once we have the cached data:
ScriptCompiler::Source sc_source(sourceString, origin, cache_entry);

// Compile, consuming the cached bytecode:
auto script = ScriptCompiler::Compile(context, &sc_source,
    ScriptCompiler::kConsumeCodeCache).ToLocalChecked();

// check that the script is valid, etc., and run:
auto result = script->Run(context).ToLocalChecked();

This saves the loading and parsing stages, which can be quite time costly. The profit is around 30-50% smaller first frame times.
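Within a single process, those successive executions need somewhere to keep the cached bytecode. Here is a sketch of what such a store could look like; the BytecodeCacheEntry layout, the hash key and the map storage are my assumptions, not the post's actual implementation, which only needs to surface the uint8_t* contents and size used above:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Assumed shape of a cache entry: just the code-cache bytes
// (ScriptCompiler::CachedData exposes equivalent data/length fields).
struct BytecodeCacheEntry {
    std::vector<uint8_t> bytes;
};

// Sketch: per-process bytecode cache keyed by a hash of the script source,
// so re-running the same script can skip the parse/compile stages.
class BytecodeCache {
public:
    void put(const std::string &source, std::vector<uint8_t> bytes) {
        entries_[std::hash<std::string>{}(source)] =
            BytecodeCacheEntry{std::move(bytes)};
    }

    // Returns nullptr on a miss: compile normally, then call put().
    const BytecodeCacheEntry *get(const std::string &source) const {
        auto it = entries_.find(std::hash<std::string>{}(source));
        return it == entries_.end() ? nullptr : &it->second;
    }

private:
    std::unordered_map<size_t, BytecodeCacheEntry> entries_;
};
```

On a hit, the stored bytes feed the cache_entry construction shown in the successive-execution snippet; on a miss, compile and store the fresh CreateCodeCache output.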

Compiled v8

I have set up a github repository with pre-compiled versions of v8 for arm and arm64, and instructions on what each specific version needs to compile successfully.

I have tested all these versions on Android, and they are ready to use in your own projects. Single static lib and snapshot/no-snapshot flavours are included.

Versions available range from 5 to 8. I will upgrade regularly as I continue developing.

Compile v8 arm, arm64, ia32

Quick guide on how to compile v8 8.4-lkgr for Android on Ubuntu in one copy/paste script. I use this script regularly, so it is well tested.

If you are looking for some already pre-compiled ready-to-use v8 version, you can find them here.

# Install git:
sudo apt install git

# Install depot_tools:
# follow the official depot_tools installation instructions.

# Fetch the v8 source code.
# Use the branch of your choice. I will use 8.4-lkgr (last known good revision).
# I'd advise always using -lkgr branches.
fetch v8
cd v8
git pull
git checkout 8.4-lkgr

# Install all dependencies: ndk, sdk, etc.
# This may take a while, it downloads the android tools (the ndk alone is +1Gb).

# Set the android target and sync:
echo "target_os = ['android']" >> ../.gclient && gclient sync

# Generate the compilation target:
# (change arm.release to the output folder name of your choice)
# Use this to compile for arm/arm64
tools/dev/v8gen.py arm.release

# Use this to compile for x86
tools/dev/v8gen.py ia32.release

# Edit the gn configuration file (out.gn/arm.release/args.gn):
# I'd recommend disabling icu support, and setting
# symbol_level=0 for faster compilation and thinner
# output libs. You can get the whole list of
# compilation options by executing:
# `gn args out.gn/arm.release --list`
# Optionally set `target_cpu="arm64"` or `target_cpu="x86"` (if ia32 was used)


# These are my args.gn file contents:
android_unstripped_runtime_outputs = false
is_component_build = false
is_debug = false
symbol_level = 1
target_cpu = "arm"
target_os = "android"
use_goma = false
use_custom_libcxx = false
use_custom_libcxx_for_host = false
v8_target_cpu = "arm"
v8_use_external_startup_data = false
v8_enable_i18n_support = false
v8_android_log_stdout = true
v8_static_library = true
v8_monolithic = true
v8_enable_pointer_compression = false

# to compile arm64, just change target_cpu and v8_target_cpu to arm64

# Compile the target:
# This may take up to 1 hour depending on your setup.
# Optionally use a -j value suitable for your system.
ninja -C out.gn/arm.release v8_monolithic

# The fat lib file has been generated by the v8_monolithic parameter at
# out.gn/arm.release/obj/libv8_monolithic.a

# Copy v8 source headers, needed for inspector compilation:
mkdir -p src/base/platform
mkdir -p src/common
mkdir -p src/inspector
mkdir -p src/json
mkdir -p src/utils
mkdir -p src/init

cp -R ../../../../src/common/*.h ./src/common
cp -R ../../../../src/base/*.h ./src/base
cp -R ../../../../src/base/platform/*.h ./src/base/platform
cp -R ../../../../src/inspector/*.h ./src/inspector
cp -R ../../../../src/json/*.h ./src/json
cp -R ../../../../src/utils/*.h ./src/utils
cp -R ../../../../src/init/*.h ./src/init

# copy v8 compilation header files:
cp -R ../../../../include ./

# For compilation on Android, always use the same ndk as 
# `gclient sync` downloaded. 
# Enjoy v8 embedded in an Android app

Compile for Android emulator

tools/dev/v8gen.py ia32.release
# edit out.gn/ia32.release/args.gn to contain the following:
is_debug = false
target_cpu = "x86"
use_goma = false
target_os = "android"
v8_use_external_startup_data = false
v8_enable_i18n_support = false
v8_monolithic = true