How to stream your Raspberry Pi camera (using Picamera2) as MJPEG


A Raspberry Pi with an attached PiCamera can work as a simple surveillance system. A very simple way to stream the images is using MJPEG.

To quickly set up MJPEG streaming, I created a Python script that encodes the images in separate threads, so the whole CPU of the Pi can be utilized. The maximum framerate equals what the camera can deliver (47 FPS in my case). On a Pi 4 this makes the CPU quite hot, so the script lets you throttle the throughput; 25 FPS is easily possible without any cooling.

Prerequisites

A Raspberry Pi with an attached camera module. I tested this on a Pi 2, 3 and 4; it probably works on a Pi 1 just as well, albeit with a severely reduced framerate.

Update the OS to the latest version:

sudo apt update
sudo apt upgrade

Install necessary python modules

sudo apt install python3-flask python3-libcamera python3-picamera2 python3-opencv

And then run the script

python pystream.py

The stream will be available at port 9000 of your Raspberry Pi: http://your-pi-IP:9000/mjpg
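The stream is plain multipart/x-mixed-replace with the boundary `frame`, so any MJPEG-capable client (VLC, most browsers, ffplay) can read it. As a rough sketch of what such a client does, this hypothetical helper (the function name is my own) splits a multipart byte stream into individual JPEG frames:

```python
# Minimal sketch of splitting an MJPEG multipart stream into JPEG frames.
# The boundary name "frame" matches the one used by the script below;
# everything else here is an illustrative assumption, not part of the script.

def split_mjpeg(data: bytes, boundary: bytes = b"--frame") -> list[bytes]:
    """Split a multipart/x-mixed-replace byte stream into JPEG payloads."""
    frames = []
    for part in data.split(boundary):
        # Each part looks like: b"\r\nContent-Type: image/jpeg\r\n\r\n<JPEG>\r\n"
        header_end = part.find(b"\r\n\r\n")
        if header_end == -1:
            continue  # boundary prefix or incomplete part
        payload = part[header_end + 4:].rstrip(b"\r\n")
        if payload:
            frames.append(payload)
    return frames
```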

If you run this in a terminal, the process will of course stop once the window is closed, unless you run it using screen. If you haven’t installed it yet:

sudo apt install screen

Start the screen by just typing

screen

A short explanation is shown; dismiss it with <RETURN>. Anything you start in this session will continue to run, even if you close the window.

To get back to a running screen process, just type

screen -r

The script (download here):

#!/usr/bin/env python3

from flask import Flask, Response
import cv2
from picamera2 import Picamera2
import libcamera
import time
import threading

# Creates an MJPEG stream from a PiCamera. It's a quick and dirty example of how to use your Pi as a surveillance camera
# Uses 2 separate threads to encode the captured image to maximize throughput by using all 4 cores
# The way this "double-buffering" is implemented causes it to only work correctly with one client. 


# Set your desired image size here
# The timing for approximating the frames per second depends largely on this setting
# Larger images means more time needed for processing
CAM_X = 1280
CAM_Y = 720


# Change this, depending on the orientation of your camera module
CAM_HFLIP = False
CAM_VFLIP = False

# Change this to control the usage and therefore temperature of your Pi. On my Pi 4 a setting of 25 FPS
# results in CPU usage of roughly 40% and no temperature throttling (no additional cooling here)
# Set to 0 to impose no restrictions (on my Pi 4 this results in ~47 FPS (maximum of my PiCamera model 2), on my Pi 2 ~17 FPS)
MAX_FPS = 25

# Flask is our "webserver"
# The URL to the mjpg stream is http://my.server:WEB_PORT/mjpg
WEB_PORT = 9000
app = Flask(__name__)

# Keeps all data for the various threads in one place
class MgtData(object):
    stop_tasks = False            # If set, all threads should stop their processing
    frame1_has_new_data = False   # Is being set when frame1 receives a new buffer to encode
    lock1 = False                 # Is being set when frame1 receives a new buffer to encode, and cleared when encoding is done
    frame2_has_new_data = False   # Same for frame 2
    lock2 = False

    img_buffer1 = None            # Receives the image as a byte array for frame 1
    img_buffer2 = None            # ... for frame 2
    encoded_frame1 = None         # Stores the JPG-encoded image for frame 1
    encoded_frame2 = None         # ... for frame 2

    # If there is new data available on frame 1, return True
    def frame1_new_data():
        return (MgtData.frame1_has_new_data and not MgtData.lock1)

    # If there is new data available on frame 2, return True
    def frame2_new_data():
        return (MgtData.frame2_has_new_data and not MgtData.lock2)


# Deliver the individual frames to the client
@app.route('/mjpg')
def video_feed():
    # Note: a generator object is always truthy, so checking gen() for truth
    # would do nothing (and calling gen() twice creates two generators)
    response = Response(gen(), mimetype='multipart/x-mixed-replace; boundary=frame')
    response.headers.add("Access-Control-Allow-Origin", "*")
    return response

# Generate the individual frame data
def gen():
    while not MgtData.stop_tasks:  
        while (not (MgtData.frame1_new_data() or MgtData.frame2_new_data())):
            time.sleep (0.01) # Wait until we have data from one of the encode-threads

        frame = get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

# If one of the frames has data already processed, deliver the respective encoded image
def get_frame():
    encoded_frame = None
    if (MgtData.frame1_new_data() or MgtData.frame2_new_data()):
        if (MgtData.frame1_new_data ()):
            encoded_frame = MgtData.encoded_frame1
            MgtData.frame1_has_new_data = False
        elif (MgtData.frame2_new_data ()):
            encoded_frame = MgtData.encoded_frame2
            MgtData.frame2_has_new_data = False
    else:
        print ("Duplicate frame")

    return encoded_frame




# Start the server
def start_webserver():
    try:
        app.run(host='0.0.0.0', port=WEB_PORT, threaded=True, debug=False)
    except Exception as e:
        print(e)

# Definition for the encoding thread for frame 1
def encode1():
    newEncFrame = cv2.imencode('.jpg', MgtData.img_buffer1)[1].tobytes()
    MgtData.encoded_frame1 = newEncFrame
    MgtData.frame1_has_new_data = True
    MgtData.lock1 = False

# Definition for the encoding thread for frame 2 (symmetric to encode1:
# only signal "new data" after the encoded frame is actually stored)
def encode2():
    newEncFrame = cv2.imencode('.jpg', MgtData.img_buffer2)[1].tobytes()
    MgtData.encoded_frame2 = newEncFrame
    MgtData.frame2_has_new_data = True
    MgtData.lock2 = False



def run_camera():
    # init picamera
    picam2 = Picamera2()

    preview_config = picam2.preview_configuration
    preview_config.size = (CAM_X, CAM_Y)
    preview_config.format = 'RGB888'
    preview_config.transform = libcamera.Transform(hflip=CAM_HFLIP, vflip=CAM_VFLIP)
    preview_config.colour_space = libcamera.ColorSpace.Sycc()
    preview_config.buffer_count = 4 # Looks like 3 is the minimum on my system to get the full 47 FPS my camera is capable of
    preview_config.queue = True
    preview_config.controls = {'FrameRate': MAX_FPS if MAX_FPS else 100}

    try:
        picam2.start()

    except Exception as e:
        print(e)
        print("Is the camera connected correctly?\nYou can use \"libcamera-hello\" or \"rpicam-hello\" to test the camera.")
        exit(1)
    
    fps = 0
    start_time = 0
    framecount = 0
    try:
        start_time = time.time()
        while (not MgtData.stop_tasks):
            if (not (MgtData.frame1_new_data() and MgtData.frame2_new_data())):

                # get image data from camera
                my_img = picam2.capture_array()

                # calculate fps
                framecount += 1
                elapsed_time = float(time.time() - start_time)
                if (elapsed_time > 1):
                    fps = round(framecount/elapsed_time, 1)
                    framecount = 0
                    start_time = time.time()
                    print ("FPS: ", fps)

                # if one of the two frames is available to store new data, copy the captured image to the
                # respective buffer and start the encoding thread
                # At max we have 4 threads: our main thread, flask, encode1 and encode2
                if (not MgtData.frame1_new_data()):
                    MgtData.img_buffer1 = my_img
                    MgtData.lock1 = True   # Mark busy; encode1 sets the new-data flag when done
                    encode_thread1 = threading.Thread(target=encode1, name="encode1")
                    encode_thread1.start()
                elif (not MgtData.frame2_new_data()):
                    MgtData.img_buffer2 = my_img
                    MgtData.lock2 = True   # Mark busy; encode2 sets the new-data flag when done
                    encode_thread2 = threading.Thread(target=encode2, name="encode2")
                    encode_thread2.start()
            time.sleep (0.0005) # No need to constantly poll, cut the CPU some slack
            
    except KeyboardInterrupt as e:
        print(e)
        MgtData.stop_tasks = True
    finally:
        picam2.close()
        cv2.destroyAllWindows()



def streamon():
    camera_thread = threading.Thread(target= run_camera, name="camera_streamon")
    camera_thread.daemon = False
    camera_thread.start()

    if camera_thread is not None and camera_thread.is_alive():
        print('Starting web streaming ...')
        flask_thread = threading.Thread(name='flask_thread',target=start_webserver)
        flask_thread.daemon = True
        flask_thread.start()
    else:
        print('Error starting the stream')

    while not MgtData.stop_tasks:
        time.sleep (25) # Just waiting to end this thread



if __name__ == "__main__":
    try:
        streamon()
    except KeyboardInterrupt:
        pass
    except Exception as e:
        print(e)
    finally:
        print ("Closing...")
        MgtData.stop_tasks = True
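The script coordinates its threads with plain boolean flags, which works in CPython thanks to the GIL but is easy to get subtly wrong. As a sketch of the same single-slot handoff guarded by a real lock (the class and names are my own, not part of the script above):

```python
import threading
from typing import Optional

class FrameSlot:
    """One buffer slot: an encoded frame plus a 'ready' flag,
    guarded by a threading.Lock instead of plain booleans."""

    def __init__(self):
        self.lock = threading.Lock()
        self.ready = False
        self.frame = None

    def put(self, frame: bytes) -> None:
        # Called by an encoder thread once a freshly encoded JPEG is done
        with self.lock:
            self.frame = frame
            self.ready = True

    def take(self) -> Optional[bytes]:
        # Called by the streaming generator; returns None if nothing new
        with self.lock:
            if not self.ready:
                return None
            self.ready = False
            return self.frame
```

In the script, two such slots would play the roles of frame 1 and frame 2.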

Darktable and Nikon Z 30


Unfortunately the NEF format used by the Nikon Z 30 is not supported by Lightroom 6.0 (yes, that was the last version that I could actually buy and sort of own), so I tried Darktable. The Z 30 is not in the list of supported cameras, but the Z 50 is. And Nikon would not have changed the format within the Z cameras, now would they?

Trying to open one of the files results in error messages:

RawSpeed:Unable to find camera in database: 'NIKON CORPORATION' 'NIKON Z 30' '12bit-compressed'
Please consider providing samples on <https://raw.pixls.us/>, thanks!
[rawspeed] (xxx_0059.NEF) bool rawspeed::RawDecoder::checkCameraSupported(const rawspeed::CameraMetaData*, const string&, const string&, const string&), line 170: Camera 'NIKON CORPORATION' 'NIKON Z 30', mode '12bit-compressed' not supported, and not allowed to guess. Sorry.
[temperature] failed to read camera white balance information from `xxx_0059.NEF'!
[temperature] `NIKON CORPORATION NIKON Z 30' color matrix not found for image
[temperature] failed to read camera white balance information from `xxx_0059.NEF'!
[temperature] failed to read camera white balance information from `xxx_0059.NEF'!
[colorin] could not find requested profile `standard color matrix'!

The same goes for 14bit versions.

However, as the Z 50 is supported, I tried just changing the signature of the files. Darktable is fine with that and can load the files.

However, I do not know whether that would lead to issues with e.g. white balance or other processing that might differ between the Z cameras. But it works for me, for now. To change the signature of all files in the current directory, you can use:

perl -pi -e 's/NIKON Z 30/NIKON Z 50/g' *
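The same in-place edit can be done with Python. A sketch (function name is my own); note that NEF files are binary, so the replacement works on bytes, and both strings have the same length so no file offsets shift:

```python
import glob

def patch_model_string(pattern="*.NEF", old=b"NIKON Z 30", new=b"NIKON Z 50"):
    """Replace the camera model string in every matching file, in place.
    Both strings must have equal length so the binary layout is unchanged."""
    assert len(old) == len(new)
    for path in glob.glob(pattern):
        with open(path, "rb") as f:
            data = f.read()
        if old in data:
            with open(path, "wb") as f:
                f.write(data.replace(old, new))
```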

As I found out later, there is already an issue logged on GitHub regarding the Z 30. The solution described there works for me – it is, after all, the same general idea: Z 30 and Z 50 files are essentially the same. If you add the definition to your cameras.xml file (located at /usr/share/darktable/rawspeed on my system), darktable works with Z 30 files as expected.

Korg i3 – power rail repair


I bought a Korg i3 – broken – from ebay, the seller claimed it just stopped working. If that is true, usually the power rail is to blame. And in most cases it’s a dead capacitor. No big deal – usually.

The first board to come out is KLM-1631:

And a check on the components showed me that something let the magic smoke out – LC1. And it was right at the connector to the power supply:

The power supply looked good, so I assumed this was the culprit.

According to the service manual, LC1 is a DST310 – a component I hadn't seen before, but I had guessed right: a blown capacitor.

I don’t have these and couldn’t find a source – maybe they aren’t produced anymore? But it’s just a power filter; I could probably get away with just connecting CN8A-5 directly to the “A” point. But having capacitive storage on a power rail is always a good thing. And since I’m already in there, I can at least do a decent job.

So I desoldered LC1, cleaned the board, and found that the traces had been eaten away. I additionally removed C92, as these capacitors were simply in parallel. Also, maybe C92 was damaged by some surge – it was in the way and costs next to nothing. Only to find out that the trace going to C92 had left us as well.

So, if in doubt – scream and shout. Uhm… these are buffer capacitors, so they just sit between plus and minus. Both traces were large enough to just scratch the solder mask off and solder in capacitors. Again, I could probably have gotten away with just one, but the original schematic had two, so I used two:

Yep, it’s ugly. Yep, I probably should have used a ceramic for the smaller one, but seriously, it is the 5V rail, quite likely this is the supply for the logic and won’t have any effect on the sound.

Starting up, it works:

So, I was right to assume a blasted capacitor in the power rail was to blame.

But, as Dr. House kept reminding us: people lie.

Next up: Repairing Power-On-Mute on an i3.

Salesforce Mailchimp integration update 1.93.2


Unfortunately Mailchimp created a little hiccup in the update 1.93.2 (2020-08-11) to their Salesforce integration. When you open the “MC Setup” page, you might be greeted with an error message:

“Read” permission to field “MC4SF__Prompt_For_Full_Sync__c” on object “MC4SF__MC_List__c” is not allowed for the current user.

I had seen this on my Developer Edition org at first and contacted the Mailchimp support. Sadly it doesn’t seem to ring enough alarm bells, the final answer was “Since the error has a quick fix and it’s not causing an issue in all accounts, we will be keeping an eye out for further instances to see if there are any similarities.” It seems a bit strange to me, as this is breaking existing installations when the update is applied.

However, there is a solution. In the field level security settings of the field “Prompt For Full Sync” on “MC Audience” you have to give read access to this field for all profiles that will use the Mailchimp integration.

In “Setup” navigate to “Object Manager”

Then select “MC Audience”

And in Fields & Relationships select “Prompt For Full Sync”

Then go to the field level security and grant at least read permissions to all profiles that use the Mailchimp integration

It seems that “read” is all that is required, but I don’t know that for a fact.

I’m just astonished that this newly introduced field seems to be a necessity, yet the managed package does not provide any permissions for it. Additionally, I would have expected Mailchimp to quickly remedy this issue. Luckily for ConcisCons customers, I had seen the problem before they ran into it.

Additionally, it seems that missing permissions in a managed package are not a first for Mailchimp: the MC Setup page is returning an error

Salesforce Lightning Component – Strange Input Behaviour


I was a bit astonished by one reaction of a lightning:input field: whatever I typed in there, nothing appeared in the field. It wasn’t any event handler that I added, but a mistake on my side.

The component was very simple:

<aura:component access="global" controller="MyController">
    <lightning:input value="{!v.filterText}"
                     label="Filter"
                     name="myFilter"
                     placeholder="Filter here...!" />
</aura:component>

The issue was that the aura:attribute named “filterText” was not defined.
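For completeness, the fix is simply to declare the missing attribute – a minimal sketch, assuming filterText should just be a String:

```xml
<aura:component access="global" controller="MyController">
    <aura:attribute name="filterText" type="String" default="" />
    <lightning:input value="{!v.filterText}"
                     label="Filter"
                     name="myFilter"
                     placeholder="Filter here...!" />
</aura:component>
```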

Okay, sure, that’s a mistake, but: really? No reaction at all? Not even an error message, just pure silence.

Roland XP-80 – dead with lights on


So I have a Roland XP-80 with a similar issue to this guy, who shows us on YouTube how to replace the battery in the thing. The LCD backlight is on, as is the LED on the disk drive. Apart from that – no sign of life.

For me it was an easy fix, as it was quite obvious what had happened:

Capacitor 201 (47 µF, 6 V) had blown up quite spectacularly. To remedy that, I soldered in one that I had lying around (not exactly the same type; mine is rated at 25 volts), and after that all was well.

The stain on the board was quite substantial, so I had to carefully scratch the sod away to see whether the traces were still in good condition (it always helps if you have access to a service manual / service notes; you can find some at synfo.nl). And yes, all was still connected according to my multimeter.

This would have been a job for a few minutes if I had done that in a sensible manner – but no, I had to start too far down the path.

Remember the simplest steps to find an issue with electronics (I think Louis Rossmann and Dave Jones deserve a bit of credit here):

  • Sniff the board – really, that was a dead give-away here. It stank of exploded component. I just ignored it.
  • Visual inspection – if it doesn’t smell funny, you should still look for the sh*t stain on the board. As you can see above, another dead give-away. I just ignored it.
  • Thou shalt check voltages – and this is where I started. At the power supply. Which was basically fine.

Now, after these steps, you can start using your brain and try to check clocks – which was what I was going to do next, but luckily the chip I wanted to probe was close to the exploded capacitor. You can follow signals, measure in circuit, desolder components to measure outside the circuit, and what not.

But only after the sniff test and the visual inspection! Took me the better part of three hours because I didn’t.

Salesforce: rendering of Visual Force pages and context


In a simple, straightforward programming situation, you would assume that functions are executed when they are seemingly called. When you are rendering a Visual Force page in Salesforce, things seem to be a bit mixed up. Though it seems counter-intuitive, it is understandable what happens.

Let’s take a simple example of a Visual Force page with a controller.

Page (atest):

<apex:page controller="Atest">
    Parameter: {!parameter} <br />
    Parameter2: {!parameter2} <br />
    Parameter: {!parameter} <br />
    Parameter2: {!parameter2} <br />
    Parameter: {!parameter} <br />
    Parameter2: {!parameter2} <br />
</apex:page>

Controller:

public class Atest {
    private Integer parameter;

    public Atest () {
        System.debug ('Constructor called.');
        parameter = 1;
    }

    public Integer getParameter () {
        System.debug ('getParameter');
        parameter = parameter + 1;
        return parameter;
    }

    public Integer getParameter2 () {
        System.debug ('getParameter2');
        parameter = parameter + 1;
        return parameter;
    }

    public PageReference nextPage () {
        PageReference page1 = new PageReference('/apex/atest2');
        Blob b = page1.getContentAsPDF();
        PageReference page2 = new PageReference('/apex/atest2');  
        b = page2.getContentAsPDF();
        PageReference page3 = new PageReference('/apex/atest2');  
        return page3;
    }
}

{!parameter} is a reference to the method getParameter () and {!parameter2} a reference to getParameter2 (). Ignore the method nextPage () for now…

So what you might expect is that the Visual Force renderer calls getParameter (). This increases the variable parameter by 1 and returns its new value. We do see the output “Parameter: 2” – as expected. Then the renderer calls getParameter2 (). This again increases the variable parameter by 1 and returns its new value. We do see the output “Parameter2: 3” – as expected.

Next, we want “parameter” again – seemingly a call to the method getParameter (). But now the method is not actually executed; parameter is not increased anymore. We get the outputs “Parameter: 2” and “Parameter2: 3” again and again, no matter how many times we think the method is called.

Now for the second part in the controller above, we need the VF page atest2, which is virtually the same, except that it is a different file. Also, for convenience to call the method in our class, add

<apex:form>
    <apex:commandButton action="{!nextPage}" value="next page" />
</apex:form>

to the page atest. When you now click on “next page”, the page atest2 is created 3 times. To make sure that it is actually rendered, we get the content of the page as a pdf, and the third time, the page is returned as a PageReference. Therefore you are transported to the page atest2.

What you now see is “Parameter: 4” and “Parameter2: 5”. Even though we have rendered the page atest2 three times, the variable parameter has only been increased twice – once per getter.

This is because the renderer works in the same context for all 3 times it renders atest2. getParameter () and getParameter2 () are both called exactly once, and that only, because we are rendering a page in a new context – the call to the method nextPage (). You could even create the pages atest3 and atest4, have them rendered after each other (in one method), and “Parameter” and “Parameter2” will be the same value for each rendered page.

Any output of a method is cached directly, and the method is not called again, except if you force a rerender – because that is what re-rendering is for. If you know that values will change, you have to instruct Visual Force to do a new rendering.
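As a loose analogy (this is not Salesforce code, and all names here are my own invention), the behaviour resembles per-context memoization: each getter runs once per rendering context, and subsequent references reuse the cached result:

```python
class RenderContext:
    """Analogy for VF's per-context getter caching: each getter is
    evaluated once per context and its result reused afterwards."""
    def __init__(self):
        self._cache = {}
        self.parameter = 1  # mirrors the controller's constructor

    def get(self, name, compute):
        # Only the first reference actually runs the getter
        if name not in self._cache:
            self._cache[name] = compute()
        return self._cache[name]

def get_parameter(ctx):
    # Mirrors getParameter()/getParameter2(): increment, then return
    ctx.parameter += 1
    return ctx.parameter
```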

To get around this, make sure that you create a new context for a new page with changing information. The easiest way to do so is IMHO to create a controller instance for every page that needs to be rendered, and this in turn can be done by having a different controller for the subsequent pages.

tl;dr:

Do not change the information of a variable or method during one rendering context. Have all information be calculated before anything is actually rendered, and do not change information when using a get-method. If information changes due to user input, have the sections that show information based on the input rerendered when necessary.

Overall: within one context the VF renderer will always call any getter only once.

Salesforce – Assets with or without Contact and/or Account


In test classes it is always a good thing not just to cover your code, but to actually test whether it works according to design. To make your test shine, test both kinds of cases: working examples as well as those where you expect an error to occur.

To test a trigger, that ensures Contact and Account to be set on an Asset (as long as certain parameters are fulfilled), I added a test case where the trigger could not work properly.

According to the Apex documentation, an Asset must have Contact and/or Account set, otherwise you will run into an Exception (FIELD_INTEGRITY_EXCEPTION: Every asset needs an account, a contact or both).

Screenshot of Asset Object Reference - AccountId must be set

So, my Test class includes

Asset a = new Asset (Name = 'Test asset');
try {
    insert a;
    System.assert (false, 'Should not reach this, an Asset needs an Account or Contact');
}
catch (Exception e) {
    // Expected: the insert should fail without Account or Contact
}

However, in the project I currently work on, this assertion fails, as our Assets can exist with neither Contact nor Account.

And this is where the documentation is plainly wrong, as it depends on the Organization-Wide Defaults for sharing. If you set the access to anything except the default (“Controlled by Parent”), you can create Assets without Account or Contact. So the documentation is wrong 75% of the time, as the settings “Private”, “Public Read Only” and “Public Read/Write” all allow Assets without either.

This was a pitfall for me, as my test class worked on one box but not on another. And sometimes failing tests hint at an issue with the org itself. But only sometimes; most times it is because a developer did not set up the test correctly.

Salesforce API documentation could be better with API versions


Once again I found a nice – no, necessary – feature. When searching for RecordType IDs in Apex, you could either query for them with SOQL (burning through your limited number of queries) OR you could just ask the schema.

With List<RecordTypeInfo> contactRTs = Schema.SObjectType.Contact.getRecordTypeInfos() you can get all available RecordTypes for this Object. With contactRTs[0].getName() you can get the label of the record type.

The label. This may depend on the language of the user, so it’s utterly useless in code. But there is also contactRTs[0].getDeveloperName() – yay! However, the documentation never states the minimum API version needed for a method call, and this is absolute crap. Why not just add a line with the API version? Otherwise you may get errors that contradict the documentation.

Yes, I know that the Summer ’18 release is not far off now, so in this case it means it was only a bit more than a week before I could use this needed feature. But it cost me quite some time – checking if I had a typo, if I misread… and then finally a search through the release notes for this function. With the API version in the docs, this would have been a matter of minutes…

Simple script for setting allowed IP address in tinyproxy


I’m using a dynamic IP address, but for Salesforce development I need a fixed one. As a proxy with a fixed IP address is enough, I use tinyproxy for that. Of course, I cannot afford to run an unrestricted open proxy. This small script helps me handle that in a half-automated way: I SSH into my proxy server, call the script, and it takes care of producing the appropriate line in the configuration:


#!/bin/bash
#
# Find the ip address of the calling ssh
callerId=`echo $SSH_CONNECTION | awk '{print $1}'`
#
# Find the allow line and set it to the ip address
sudo sed -i "/# set from script/!b;n;cAllow ${callerId}" /etc/tinyproxy.conf
sudo systemctl restart tinyproxy

In order for this to work, you need a marker line in your tinyproxy.conf. The script looks for “# set from script” and replaces the following line with an Allow entry.
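The sed expression is terse: `/# set from script/!b` skips lines that do not contain the marker, `n` moves on to the next line, and `c` replaces that line. The same marker-then-replace logic, sketched in Python (the function name is my own):

```python
def set_allow_line(conf_text, ip, marker="# set from script"):
    """Replace the line immediately after `marker` with 'Allow <ip>'.
    Mirrors the sed command: find the marker, rewrite the next line."""
    lines = conf_text.splitlines()
    for i, line in enumerate(lines):
        if marker in line and i + 1 < len(lines):
            lines[i + 1] = f"Allow {ip}"
    return "\n".join(lines)
```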