Sending files through WebRTC


Back in 2016, I worked on an application called ShareTC that allowed users to share files through their web browser using WebRTC. The application worked well, but the promise was to share files without any server. The first version used PeerJS, a library that makes it easy to implement a WebRTC data channel for file sharing, with the drawback of needing a server for what is called the signaling process. So I made some improvements to the project to let users share their files without even using a signaling server.
For those who are not interested in the technical implementation, you can have a look directly at the demo on GitHub.


Originally, the project was barebone native JavaScript, but I decided to upgrade it to Angular 8: exchanging the signaling configuration is a little complex, so I needed a framework for easy DOM manipulation.

Because this is a side project, I didn’t want to recreate a component library so I used the @angular/material library for the forms and user feedbacks.

I also used @angular/flex-layout to easily define the layout, knowing that the application design is really basic as it is not the primary intent of the demonstration.

How it works

Now for the interesting part: the application tries to create a data channel between the sender and the receiver using WebRTC. The protocol needs to know whom it should send the data to; this is called peer discovery. It can be done automatically through what is called a signaling server, a server where you exchange your configuration to begin the communication, and this is the component that I wanted to remove. For the manual configuration we use the Session Description Protocol (or SDP).

But because networks are a little complex, in some cases you’ll need a Session Traversal Utilities for NAT (or STUN) server. This is used when you’re exchanging a file from behind a Network Address Translation (or NAT) firewall. There are some public servers available for this, like:

In some other cases the communication channel will be blocked by firewalls. You can then use a Traversal Using Relays around NAT (or TURN) server that works as a data relay between you and your peer. TURN servers are expensive and only necessary behind a blocking firewall, so I didn’t integrate a solution for that 😊

So here is what we do: we generate the sender configuration SDP, which you share with the receiver. The receiver can then generate his own SDP configuration, which you import in order to create the communication channel. You are then able to send the file over Datagram Transport Layer Security (or DTLS), a secure communication channel.

How to make it happen

Now that you have the WebRTC communication scheme in mind, let’s see some interesting parts of the project. You might want to have the code open to better understand this part by accessing the project repository flyersweb/sharetc.

First, I chose the great feross/simple-peer library for the WebRTC communication channel. To use the library we had to add some polyfills (check polyfills.ts) to the project for the global, process and buffer objects, like this:

(window as any).global = window;
(window as any).process = {
    nextTick: setImmediate,
    env: { DEBUG: undefined },
};
(window as any).global.Buffer = (window as any).global.Buffer || require('buffer').Buffer;

If you’re using TypeScript you might also have to add the Node type declarations for native modules to the types array of your tsconfig.json:

"types": [
    "node"
]

I’ve also decided to compress the SDP configuration using LZ-based compression so it is easier to share. This is why the configurations will look more like this:


But by decompressing it you’ll get a valid SDP signal message.

And finally, in order to send and download the file we use the JavaScript File API so we can send it through the data channel. The received data is stored as an ArrayBuffer, so we generate a download link through a Blob for the file to be downloaded under the name file.dat, as shown in the download component.
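That download-link step can be sketched like this (the function name is hypothetical; in the app the equivalent logic lives in the download component):

```javascript
// Hypothetical sketch: wrap the received ArrayBuffer in a Blob and create an
// object URL that an anchor element can expose as a download.
function makeDownloadLink(arrayBuffer, filename = 'file.dat') {
  const blob = new Blob([arrayBuffer], { type: 'application/octet-stream' });
  const url = URL.createObjectURL(blob);
  // In the component, the URL is attached to an anchor so the user can save it:
  //   const a = document.createElement('a');
  //   a.href = url;
  // = filename;
  //;
  //   URL.revokeObjectURL(url);
  return { blob, url, filename };
}
```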

To exchange messages through the communication data channel we also need to convert the ArrayBuffer to a string using this function:

function ab2str(buf: ArrayBuffer): string {
  return String.fromCharCode.apply(null, new Uint16Array(buf));
}

You can check how it is used in the upload component.
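For completeness, here is what the reverse direction can look like (a hypothetical str2ab mirroring the 16-bit layout ab2str expects):

```javascript
// Hypothetical counterpart to ab2str: pack each UTF-16 code unit of the string
// into a Uint16Array so it can be sent through the data channel.
function str2ab(str) {
  const buf = new ArrayBuffer(str.length * 2); // 2 bytes per code unit
  const view = new Uint16Array(buf);
  for (let i = 0; i < str.length; i++) {
    view[i] = str.charCodeAt(i);
  }
  return buf;
}

// Same decoding function as above, shown here so the round trip is complete.
function ab2str(buf) {
  return String.fromCharCode.apply(null, new Uint16Array(buf));
}
```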

The demo

Because you have to exchange the SDP configuration and create the data channel before sending the file, the application is a little more complex to grasp. So to use the demo application you’ll need to:

1. Send your sender configuration to the receiver
2. The receiver enters your configuration and hits connect (it connects to the sender)
3. The receiver then shares his generated configuration with you, the sender
4. The sender hits connect to create the data channel
5. The duplex stream is up and running

You can access the demo at


It was a long road to get the application that I wanted, one that lets users share their files securely without any server. But I’m really happy to have achieved it despite the network complexity, thanks to the projects that make using WebRTC easier every day.

You can check the project source code on the GitHub repository flyersweb/sharetc and give me a tip so that I can buy myself a coffee to work more on this 😊

Embedded Browser SQLite: SQL.js


Sometimes your application provides so little data that it may be overkill to use an API to access it. In such cases you might want to embed the database within your application. This technique has actually been used for a while in mobile applications. It can be useful to speed things up on low-bandwidth devices, boost application launch and more. Thing is, I'm a web developer and I wanted to be able to do the same in one of my web applications.


The recommended and compatible way to store data in the browser is by using IndexedDB. Thing is, it uses a different approach than pure SQL, as explained in the documentation. The API is also tedious to use, which is why MDN recommends using a more programmer-friendly library for simple usages. And to finish, the IndexedDB format, being younger, is not as widely supported as SQLite.

Based on that, I wanted to try to load a SQLite database into my browser and query it. The database contains some 13 thousand entries about file extension information. The application will provide the company and software associated with the file extension searched by users.

Web Assembly

MDN describes WebAssembly as a way to run code written in multiple languages on the web at near-native speed, enabling client apps on the web that previously couldn’t have run there. It is a low-level assembly-like language with a compact binary format that runs with near-native performance and provides languages such as C/C++ and Rust with a compilation target so that they can run on the web. Toolchains like Emscripten compile the code through LLVM to a WebAssembly binary that the browser then loads.

How WebAssembly works

Usages can be:
    • Porting a C/C++ application with Emscripten.
    • Writing or generating WebAssembly directly at the assembly level.
    • Writing a Rust application and targeting WebAssembly as its output.
    • Using AssemblyScript, which looks similar to TypeScript and compiles to a WebAssembly binary.


A while ago I heard about a project to load and parse PDF files directly in the browser and learned that it uses a port of PDFium. So I began to search for a port of a SQLite client and, by chance, found the awesome SQL.js project by Alon Zakai. It provides a port to WebAssembly and a JavaScript loader.

Installing it was a breeze, as there is a first release and an npmjs package. You can check the documentation for the different usages.

npm install --save sql.js


So I decided to start working on this because I didn’t find any API or database listing file extensions, something I needed for my Torrent crawler project. I made a SQLite database and created a React application to showcase it.

While implementing the solution I found some blockers that I wanted to share for those having the same issues.

To run SQL.js, the module needs to load the WASM file in order to parse the SQLite database. Thing is, because I used Webpack 4, I always got the following error when trying to load it from the bundle:

`Web Assembly module is included in initial chunk. This is not allowed, because Web Assembly download and compilation must happen asynchronous.`

Some research taught me that Webpack is not yet fully compatible with WebAssembly, so I couldn’t include the file in my bundle. So I hacked my way around it by loading the wasm file directly from the SQL.js repository (not a big fan of that, but it worked):

let config = {
    locateFile: filename => `$(unknown)`
};

This way you can pass the `config` object to the SQL.js initialization function and it will load the wasm from the specified location.

Pack the database

I also needed to embed the database with my application. This was more straightforward, as Webpack provides a file-loader to include any binary file in your project. To use it you just have to add this to your Webpack configuration file.

{
    test: /\.(sqlite)$/i,
    use: [
        {
            loader: 'file-loader',
            options: {
                name: '[name].[ext]'
            }
        }
    ]
}
After that, you just import your database to get its URL on your server. I did it this way:

import database from '../assets/database.sqlite';

Load the database

Now that we have SQL.js initialized and our database packed with the application, we need to load it, as SQL.js does not provide database loading by URL. The documentation shows how to do it; I used the same code and specified the database path.

var xhr = new XMLHttpRequest();'GET', database, true);
xhr.responseType = 'arraybuffer';
xhr.onload = () => {
    var uInt8Array = new Uint8Array(xhr.response);
    var db = new SQL.Database(uInt8Array);
};

Request the database

To finish, I added a simple query in the application to allow users to search by file extension using the `db.exec` function provided by SQL.js.

You can check the relevant code in the `Search.js` React component in the project. It will search based on requested extension and update the Redux store afterwards.

See the demo

You can check the project source code and demo at

Hoping that this will be useful for those who want to embed some light databases in their browser applications. Moreover, by not exposing an API that queries the database on a server, this removes a server-side security breach vector through SQL injection.

I also wanted to thank Cory House for his coryhouse/react-slingshot starter kit with React + React-Redux, which I used to make this application.

Upgrading Angular Symfony in 10 Steps


In 2013 I had some free time and provided a bootstrap project for people who wanted to create a website with a REST API based on Symfony 2 and AngularJS. Back in the day, the project attracted some attention and some contributors began to work on it too. It was forked more and more. But, for lack of time, I couldn’t maintain the project and upgrade it to new software versions. So it got very outdated.

The good thing is that I recently took a moment to upgrade the project to Symfony 4 and Angular 8. So yes, it was a very big gap between versions, but given the very limited size of the project it wasn’t much of a pain. In the process I learned some tricks that I wanted to share with you.

To make things clear, I want to recall that the project uses the Web Services Security (WS-Security) standard for API communication and exposes a REST API.

You might want to have the project open in another window in order to fully follow the upgrade steps:


The project uses a technique called WS-Security UsernameToken in order to authenticate the connected user. You can find a detailed explanation in the OASIS specifications:

The simplified process is as follows:
  • the client obtains the shared secret by authenticating with the server
  • the client generates a user token using a nonce (random string), a created date time and the secret (shared with the server)
  • the client sends a token with a different nonce for each subsequent query

The server knows at which date time the token was created; it can detect a replay attack if a nonce is sent twice, and it can authenticate the client through the shared secret.

Now that we had a quick refresh let’s dive in the upgrade process.

Upgrade Node

The project was using NodeJS 5 and we upgraded it to NodeJS 12. Because this is a big gap, it was better to start a new Angular 8 project from scratch based on Webpack.

Upgrade PHP

The project was using PHP 5 and we upgraded it to PHP 7. There were a lot of breaking changes and some real improvements between these two versions so we also started a new project from scratch using Symfony 4.

Upgrade Angular

Between AngularJS and Angular 8 there is an abyss. The frameworks are so different that it was easier to start over with a new project, copy/pasting the important parts of the algorithms. It was also a good exercise to add some improvements to the existing code.

This is how we removed a useless custom Base64 encoding function in favor of the CryptoJS version.

I searched online and didn’t find any direct example of Base64 encoding using CryptoJS; this is how I did it:


You first need to parse your string using the correct encoding then use the toString function specifying the Base64 output.

We also removed the custom random string function and added the ‘random-string’ dependency. FYI, there is also a ‘randomstring’ dependency, but it didn’t work on my laptop, complaining that there is no global defined; maybe the NodeJS version breaks something for that dependency.

The Angular structure totally changed but you can find all the interesting parts in the ‘token.service.ts’ file.

Upgrade Symfony

Same as before: the gap between Symfony 2 and Symfony 4 was so big that we just started a new project from scratch. The implementation of the WS-Security UsernameToken is available in the official Symfony documentation.

The implementation we used is exactly the same. FYI, the regular expression to parse the X-WSSE header wasn’t working because of escaped double quotes coming from the Symfony request headers. Besides that, it’s all the same.

It was also a good excuse for some improvements, so we preferred Nelmio/CORS in lieu of a custom request listener for Cross-Origin Requests. We added FOS/FOSRestBundle to manage the REST API part more easily and upgraded the FOS/FOSUserBundle dependency.

Thing is, this last dependency is not fully compatible with Symfony 4, so we had to use some tricks to make it work. While installing the dependency by following the instructions, you might have to do the following.

First, move the configuration to ‘config/packages/fos_user.yaml’. Second, add the FOSUserBundle routes definitions to ‘config/routes/fos_user.yaml’.
Finally, we generated a migration and created the User entity in the database using the following commands:
bin/console doctrine:migrations:diff && bin/console doctrine:schema:update --force

Additionally, we decided to send logs to stderr because the project is dockerized. This way you get logs in real time in your Docker daemon. To do that, it was necessary to update the ‘monolog.yaml’ configuration to use ‘php://stderr’.
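The relevant configuration looks roughly like this (a sketch; the handler name and level are illustrative):

```yaml
# config/packages/monolog.yaml (sketch)
monolog:
    handlers:
        main:
            type: stream
            path: 'php://stderr'
            level: debug
```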

Upgrade database

While working on the project I decided to migrate the existing MySQL database to PostgreSQL, because PostgreSQL is such an amazing project, being so powerful and well maintained. I do think that MySQL might have reached this quality without all the commercial fuss around it, but that is another story.

So moving from MySQL to PostgreSQL was actually really easy on this project: we just had to install the database, add pdo_pgsql to PHP and update the connection URL in ‘.env’. I really love the new way of configuring Symfony 4; it makes this a breeze.

We also decided to improve the project a little by adding a doctrine fixture for the demo’s sample user.

Staying connected

To be able to stay connected after a page refresh, we had to use the localStorage to store the token generation data. This way we can come back to our client and still be connected. Tokens have a lifetime of 5 minutes.

You can change this lifetime in the ‘WsseProvider.php’ file if necessary.
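A minimal sketch of that persistence (the key name and stored shape are illustrative; the real code keeps whatever the token service needs to regenerate tokens):

```javascript
const AUTH_KEY = 'auth'; // illustrative storage key

// Persist the data needed to regenerate WSSE tokens across page refreshes.
function saveAuth(storage, username, secret) {
  storage.setItem(AUTH_KEY, JSON.stringify({ username, secret }));
}

// On startup, restore it (or get null when the user never logged in).
function loadAuth(storage) {
  const raw = storage.getItem(AUTH_KEY);
  return raw === null ? null : JSON.parse(raw);
}
```

In the browser, `storage` is simply `window.localStorage`.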

Update docker configuration

By upgrading so much of the project, we could actually make some significant improvements to the Docker configuration. On the Angular side, the build and watch mode process is now natively supported. On the Symfony side, the NGINX configuration through PHP-FPM was also much easier, without the need for custom configuration anymore. You can have a look at it in the dockerify folder containing the ‘docker-compose.yml’ infrastructure and the NGINX configuration files.

Please be aware that the project is in development mode with watch mode activated; it is not suited for production deployment as is. The idea is to offer a bootstrap project so developers can begin working on their project with authentication out of the box. You might have to configure your own continuous delivery system for deployments.

Update License

To finish, the license was also upgraded to the MIT License, so you're free to use, modify and/or redistribute this software. The README was also updated with the latest installation instructions.


Thanks for reading, I hope the project will be useful to you. You should have a look at the README for more details.
