itk-wasm "Hello WASM World!" example not working with Node.js

Dear Andras,

I have a beginner question. I spent almost two weeks putting together my development environment on Windows (WSL2 + Ubuntu 20.04) and macOS.
I use the WSL2 subsystem on Windows as the Linux backend; my frontend is VS Code (client) connected to a WSL terminal. The example I am referring to is at the following link: Hello WASM World! | itk-wasm

  1. I can run the WASI version, which prints "Hello WASM World!", but I cannot get the Node.js part to work. It fails with the following error: `SyntaxError: Unexpected token 'export'`

Can you help me fix my environment? I attached a screenshot of my WSL + Ubuntu setup so you can see the index.mjs code and the error output.
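(For context, Node throws `SyntaxError: Unexpected token 'export'` when a file containing ES-module syntax is parsed as CommonJS. A minimal sketch of the two module styles; the ESM form is shown in comments because it only loads when Node treats the file as an ES module:)

```javascript
// Sketch of why the error appears: Node parses a file as CommonJS by
// default, and CommonJS does not understand the `export` keyword.

// ESM syntax (what index.mjs uses) -- only valid when Node loads the
// file as an ES module (.mjs extension, or "type": "module"):
//   export function hello() { return 'Hello WASM World!'; }

// CommonJS equivalent, which Node accepts in any .js file:
function hello() {
  return 'Hello WASM World!';
}
module.exports = { hello };

console.log(hello()); // prints "Hello WASM World!"
```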

Thanks in advance.


I don’t have experience with this, but hopefully others (maybe @matt.mccormick) will be able to give advice.

I appreciate it.

kind regards,

Hi Matt,
I am stuck using itk-wasm with Node.js. I explained above what is happening; could you please help me with this? I am really desperate to get this working so that I can move on to segmentation, registration, etc.

Kind regards,

While googling the export error I found this post, which might be helpful. Please check the versions.
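As a quick sanity check on versions: unflagged ES-module (.mjs) support only became stable in Node 12.17. A sketch of a runtime check (the version threshold below is an assumption; pick whatever your tooling requires):

```javascript
// Print the running Node version and whether it is recent enough for
// unflagged ES-module (.mjs) support (stable since Node 12.17).
const major = Number(process.version.slice(1).split('.')[0]);
console.log(process.version, major >= 14 ? 'ok for ESM' : 'consider upgrading');
```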



Hi Prnjal,
I got it to work. I did the following to fix my environment:

  1. I reinstalled VSCode in the suggested default folder …\APPDATA\LOCAL…
    This installation gave me WSL Ubuntu 20.04 (under Docker) connected to the WSL terminal; thus I can run WSL Linux as the backend and VS Code as the frontend. Now I can dockerize it and launch it on the cloud.
  2. I ran `npm init` to create package.json and then added `"type": "module"` to it.

As you can see below, it worked. I think `"type": "module"` did the trick, but my VS Code installation was not right either!
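For reference, a minimal sketch of the resulting package.json (the name and version fields are placeholders):

```json
{
  "name": "itk-wasm-hello-world",
  "version": "1.0.0",
  "type": "module"
}
```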



@lassoan ,
Thanks for the response. I got it to work. I had a problem with my VS Code installation, and secondly I had to mark the project containing index.mjs as `"type": "module"` in the Node project's package.json, and that did the trick. Now I can run wasm/WASI on Linux/WSL2 in VS Code, use ITK/C++ there, compile and wrap JavaScript around the C++, and take it to the mobile world. Exciting stuff.

Andras, I also have a question for you about the project you are heavily involved in, 3D Slicer. I am also very much into medical image processing; it is one of my areas of expertise. Here is my question: I have SimpleITK dockerized and built inside a Linux container. I would like to run my image processing there (segmentation, registration, 3D work, filtering, and detecting all kinds of abnormalities in MRI/CT images), and then have my mobile JavaScript frontend display the results after downloading the images. What are your thoughts on this? Should I save the processed 2D/3D images to files, download them back to the frontend, and work with them there, or what do you recommend? Do you do the same thing in 3D Slicer? Is ParaView based on a full-stack architecture with the backend on the cloud and a JavaScript frontend? Please enlighten me with your experience.


You can run 3D Slicer in a docker container and use its REST API for remote control and remote rendering. In some projects (for example, when Slicer is used as a Jupyter notebook kernel) we use noVNC to make Slicer’s interactive viewers directly available in a web browser.
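As a sketch of what remote control could look like from a Node client (the port number and endpoint path below are assumptions for illustration only; check the Slicer Web Server module documentation for the actual API):

```javascript
// Build a request URL for a hypothetical Slicer web-server endpoint.
// Port 2016 and the /slicer/exec path are assumptions, not confirmed API.
const base = 'http://localhost:2016';
const url = new URL('/slicer/exec', base);
url.searchParams.set('source', 'slicer.util.loadVolume("/data/ct.nrrd")');

console.log(url.href);
// A real client would then issue the request, e.g. with fetch(url).
```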



Hi Andras,

Thanks for the response; I will try it very soon.

I have two questions:

  1. How do you use OpenGL in Slicer? I want to do some rendering on the cloud backend side. I have read about WebGL but have not been able to implement it yet. What do you use instead of OpenGL? When I create a 3D segmentation in a pipeline, where is the OpenGL context created, and how could I implement that on a cloud backend? I could not use WebGL because I don't know how to create an ITK pipeline using WebGL. Did you rewrite the ITK/VTK/… classes? Did you use WebAssembly (wasm + WASI), or wasm/WASI + WebGL? Is there any way I can plug WebGL into the 3D segmentation pipeline and get a WebGL context on my mobile device, presenting the 3D segmented image?
    Please forgive me if my questions are very primitive or off target.


We run Slicer in docker, with OpenGL rendering. In most cases we use software rendering, but when we need to render large meshes or volumes (using raycasting) we use GPU acceleration via VirtualGL.

This is really simple, because we don't need to use WebAssembly, WebGL, etc.; we can keep using ITK and VTK in C++ or Python. It is also nice that the rendering capabilities don't depend on the web browser (you can render arbitrarily large volumes, whatever your server can handle), and data does not have to be transferred to the client's browser. Of course, the disadvantage is that the server has to be powerful (since it performs the rendering), and for simple scenes remote rendering has higher latency.

You can also launch local applications directly from the browser. The user experience can be quite smooth: the user just clicks on a link with a custom protocol (e.g., slicer://viewer/?studyUID=2.16.840.1.113...) and the associated local application starts automatically, downloads the data based on the query parameters, lets the user work on the data, then uploads the result to the location specified in the query parameters. This is the same way you edit MS Office files on OneDrive or Dropbox, using locally installed Word/Excel/PowerPoint applications.
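A sketch of how the receiving application could pick the query parameters out of such a custom-protocol link (the study UID below is a made-up placeholder, not a real identifier):

```javascript
// Parse a custom-protocol launch URL with the WHATWG URL class.
// The studyUID value is a hypothetical placeholder.
const link = new URL('slicer://viewer/?studyUID=1.2.840.99999.1');
const studyUID = link.searchParams.get('studyUID');

console.log(link.protocol); // "slicer:"
console.log(studyUID);      // "1.2.840.99999.1"
```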
