Using your OBS output as input for your webcam in Debian

During the last few weeks I’ve been playing with Open Broadcaster Software (OBS) Studio, or simply OBS: an awesome free, open source software project that, among many other things, allows you to build custom scenes to use as input for video conferences. How can it be done in Linux, and more precisely in Debian?

About OBS

I discovered OBS a couple of months ago, while learning how to produce video content for the Juntos desde casa initiative. It allows the user to compose scenes with several sources (images, video camera input, mics, etc.) and to manage those scenes as if you were a video production manager in a studio.

OBS Studio and scene transitions

It’s already available for many platforms and operating systems, and installing it in Debian is as easy as:

$ sudo apt install obs-studio

One of its cool features is that it allows direct streaming of the generated video to several services like YouTube or Twitch. But it also allows you to record the output as a video file to share or stream later.

But would it be possible to use these scenes in a video conference call? I’ve discovered there are plugins to do it on certain operating systems, but in Linux (and for Debian in my case), it requires some work.

How to make it work?

I would like to thank Henning Jacobs for his detailed post about this topic. It has been very useful for me.

First you need to install v4l2loopback-dkms:

$ sudo apt install v4l2loopback-dkms

To activate it, I have followed Henning’s recommendations:

$ sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1
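
If you want this virtual device to be created automatically on every boot, a minimal sketch (assuming the same module options as above; the file names are just a convention) is to declare the module and its options in the standard configuration directories:

$ echo "v4l2loopback" | sudo tee /etc/modules-load.d/v4l2loopback.conf
$ echo 'options v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1' | sudo tee /etc/modprobe.d/v4l2loopback.conf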

Next, you need to compile and install the obs-v4l2sink plugin, but first you might need cmake and libobs-dev. So, as usual in Debian:

$ sudo apt install cmake libobs-dev

Once installed, you need to follow these steps:

$ sudo apt install qtbase5-dev
$ git clone --recursive https://github.com/obsproject/obs-studio.git
$ git clone https://github.com/CatxFish/obs-v4l2sink.git
$ cd obs-v4l2sink
$ mkdir build && cd build
$ cmake -DLIBOBS_INCLUDE_DIR="../../obs-studio/libobs" -DCMAKE_INSTALL_PREFIX=/usr ..
$ make -j4
$ sudo make install

Sadly, it seems that the plugin file ends up in the wrong folder (/usr/lib/obs-plugins/), and you need to copy it to the right one:

$ sudo cp /usr/lib/obs-plugins/v4l2sink.so /usr/lib/x86_64-linux-gnu/obs-plugins/
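
To double-check that everything is in place, you can verify that the plugin file is in the new location and that the loopback device is visible (the v4l2-ctl tool comes in the v4l-utils package, which you may need to install first):

$ ls /usr/lib/x86_64-linux-gnu/obs-plugins/ | grep v4l2sink
$ sudo apt install v4l-utils
$ v4l2-ctl --list-devices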

Now, if you run OBS, under Tools you will find a new option: V4L2 Video Output. There you need to set the path to the V4L2 device (remember the video_nr parameter used when activating the v4l2loopback module).

V4L2Sink plugin settings

And now, you should be able to choose your “OBS Cam” in any video conference tool. For example in Jitsi:

OBS output in Jitsi as webcam

If you notice that the output is mirrored or upside down, don’t worry. As far as I have tested, the other participants see it correctly on their screens.

One of the cool features of using OBS this way is that you can give an internal talk or training, with nice scenes and transitions between cameras, and record it for later consumption:

If you find this article interesting, please let me know in the comments to this post. Thank you!

Transcribe a talk into a blog post

A couple of years ago, Diane Mueller shared with me her experience converting her talks into blog posts or written content to reuse later. I’ve tried the same approach with one of my latest talks, to easily turn it into a draft for a blog post. How was the experience?

Getting the audio from the talk

There are several ways to get the audio from your own talks. The one I’ve used the most is the audio recording functionality on my phone. No extra apps or hardware needed.

In other cases, if the talk is published on YouTube, you can download it with youtube-dl:

$ youtube-dl <YOUTUBE-VIDEO-URL>

In that case, to extract the audio from the mp4 file you can use ffmpeg:

$ ffmpeg -i <VIDEO-FILE-NAME>.mp4 -f mp3 -ab 192000 -vn <AUDIO-FILE-NAME>.mp3

Getting the text from the talk

There are several speech-to-text services. Some are fully automatic, and some include partial or full human curation. I’ve decided to try AWS Transcribe to test the outcome of AI transcription.

The process is not very straightforward, but it’s simple enough for me. First, you need to upload the file to an AWS S3 bucket. Once uploaded, you need to copy the file’s address, because you’ll need it later. It’ll be something like:

s3://<S3-BUCKET-NAME>/<AUDIO-FILE-NAME>.mp3
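
By the way, if you prefer the command line over the web console, the upload can also be done with the AWS CLI (assuming it is installed and configured with your credentials):

$ aws s3 cp <AUDIO-FILE-NAME>.mp3 s3://<S3-BUCKET-NAME>/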

In AWS Transcribe, create a new job by giving it a name and the S3 address of the file to transcribe:

AWS Transcribe job set up
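
The same job can also be created from the AWS CLI; a minimal sketch, assuming an English talk and the mp3 file uploaded above (the job name is just a placeholder):

$ aws transcribe start-transcription-job \
    --transcription-job-name <JOB-NAME> \
    --language-code en-US \
    --media-format mp3 \
    --media MediaFileUri=s3://<S3-BUCKET-NAME>/<AUDIO-FILE-NAME>.mp3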

There are some limitations. For example, it’s not possible to transcribe audio files longer than 4 hours. Well, I can speak a lot during a talk, but not that much before people fall asleep.

As an outcome, it produces a JSON file where one of the fields (transcript, one of the items in results.transcripts) contains the transcription produced. You can see a preview on the job outcome page:

AWS Transcribe job outcome preview
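
If you download that JSON file, you can extract just the transcript text locally; for example with jq (the file name here is hypothetical):

$ jq -r '.results.transcripts[0].transcript' <JOB-NAME>.json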

Testing with a real talk

I’ve tried it with my talk about Open Source Program Offices (OSPO) from the last Southern California Linux Expo (SCaLE):

Of course, English is not my mother tongue, and that could explain some big discrepancies between my talk and the transcription generated.

What do you think about these AI solutions? Have you already tried one? What is your experience? Please feel free to share your answers as comments to this post. Thank you!

Last month’s conferences: PyConES, LibreCon, Open Source Summit Europe and Liferay Spain Symposium 2017

Jono Bacon at Open Source Summit showing a photo of me!

This post was initially intended to share thoughts about PyConES 2017, but it’s been a stressful month of events. So it also contains experiences from the latest conferences I’ve attended, participated in, or spoken at: LibreCon, Open Source Summit Europe and Liferay Spain Symposium.

Let’s start the reviews…

PyConES 2017

Continue reading “Last month’s conferences: PyConES, LibreCon, Open Source Summit Europe and Liferay Spain Symposium 2017”

Public speaking: facing shyness and stage fright

Many people don’t believe me when I say I am a shy person and I don’t like public speaking. How have I learned to overcome stage fright?

Everything started at the university (where else?) in the late 90’s. I studied Industrial Engineering (nothing related to computer science or programming). I was mad about HP calculators: participating in some international newsgroups, beta testing new models, and even writing some code. That was my first participation in online tech communities.

My colleagues at the university asked me to give a training course. That was my first public talk, in front of around 100-200 people in a classroom, using real transparencies. I still remember my legs shaking.

Continue reading “Public speaking: facing shyness and stage fright”