Darktable and Nikon Z 30

Unfortunately the NEF format used by the Nikon Z 30 is not supported by Lightroom 6.0 (yes, that was the last version that I could actually buy and sort of own), so I tried Darktable. The Z 30 is not in the list of supported cameras, but the Z 50 is. And Nikon would not have changed the format within the Z cameras, now would they?

Trying to open one of the files results in error messages:

RawSpeed:Unable to find camera in database: 'NIKON CORPORATION' 'NIKON Z 30' '12bit-compressed'
Please consider providing samples on <https://raw.pixls.us/>, thanks!
[rawspeed] (xxx_0059.NEF) bool rawspeed::RawDecoder::checkCameraSupported(const rawspeed::CameraMetaData*, const string&, const string&, const string&), line 170: Camera 'NIKON CORPORATION' 'NIKON Z 30', mode '12bit-compressed' not supported, and not allowed to guess. Sorry.
[temperature] failed to read camera white balance information from `xxx_0059.NEF'!
[temperature] `NIKON CORPORATION NIKON Z 30' color matrix not found for image
[temperature] failed to read camera white balance information from `xxx_0059.NEF'!
[temperature] failed to read camera white balance information from `xxx_0059.NEF'!
[colorin] could not find requested profile `standard color matrix'!

The same goes for 14bit versions.

However, as the Z 50 is supported, I tried just changing the signature of the files. Darktable is fine with that and can load the files.

That said, I do not know whether this could lead to issues with e.g. white balance or other processing that might differ between the Z cameras. But it works for me, for now. To change the signature of all files in the current directory, you can use:

perl -pi -e 's/NIKON Z 30/NIKON Z 50/g' *

As I found out later, there is already an issue logged on GitHub regarding the Z 30. The solution described there works for me – it is, after all, the same general idea: Z 30 and Z 50 files are essentially the same. If you add the definition to your cameras.xml file (which lives in /usr/share/darktable/rawspeed on my system), Darktable works with Z 30 files as expected.
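The GitHub workaround boils down to duplicating the existing Z 50 entries. A minimal sketch of what a duplicated entry might look like – the make, model and mode strings are taken from the error message above; everything between the tags should be copied verbatim from the actual Z 50 block in cameras.xml, not from here:

```xml
<!-- Hypothetical sketch: one <Camera> entry per mode (12bit-compressed, 14bit-compressed, …).
     Copy the child elements (sensor, crop, colour data) unchanged from the Z 50 entry. -->
<Camera make="NIKON CORPORATION" model="NIKON Z 30" mode="12bit-compressed">
    <!-- … contents copied from the corresponding "NIKON Z 50" entry … -->
</Camera>
```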

Salesforce Mailchimp integration update 1.93.2

Unfortunately Mailchimp created a little hiccup in the update 1.93.2 (2020-08-11) to their Salesforce integration. When you open the setup “MC Setup” you might be greeted with an error message:

“Read” permission to field “MC4SF__Prompt_For_Full_Sync__c” on object “MC4SF__MC_List__c” is not allowed for the current user.

I had seen this on my Developer Edition org first and contacted Mailchimp support. Sadly it doesn’t seem to ring enough alarm bells; the final answer was “Since the error has a quick fix and it’s not causing an issue in all accounts, we will be keeping an eye out for further instances to see if there are any similarities.” That seems a bit strange to me, as this breaks existing installations when the update is applied.

However, there is a solution. In the field level security settings of the field “Prompt For Full Sync” on “MC Audience” you have to give read access to this field for all profiles that will use the Mailchimp integration.

  • In “Setup”, navigate to “Object Manager”
  • Select “MC Audience”
  • In “Fields & Relationships”, select “Prompt For Full Sync”
  • Go to the field level security and grant at least read permission to all profiles that use the Mailchimp integration

It seems that “read” is all that is required, but I don’t know that for a fact.

I’m just astonished that this newly introduced field seems to be a necessity, yet the managed package does not provide any permissions for it. Additionally, I would have expected Mailchimp to remedy this issue quickly. Luckily for ConcisCons customers, I had seen the problem before they ran into it.

Additionally, it seems that missing permissions in a managed package are not a first for Mailchimp: the MC Setup page is returning an error.

Salesforce Lightning Component – Strange Input Behaviour

I was a bit astonished by the behaviour of a lightning:input field: whatever I typed, nothing appeared in the field. It wasn’t any event handler that I had added, but a mistake on my side.

The component was very simple:

<aura:component access="global" controller="MyController">
    <lightning:input value="{!v.filterText}"
                     placeholder="Filter here...!" />
</aura:component>

The issue was that the aura:attribute with the name “filterText” was not defined.

Okay, sure, that’s a mistake, but: really? No reaction at all? Not even an error message, only pure silence.
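For reference, the fix is simply to declare the attribute. A minimal sketch, with the same names as in the snippet above:

```xml
<aura:component access="global" controller="MyController">
    <!-- The missing piece: without this declaration, v.filterText silently resolves to nothing -->
    <aura:attribute name="filterText" type="String" />
    <lightning:input value="{!v.filterText}"
                     placeholder="Filter here...!" />
</aura:component>
```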

Roland XP-80 – dead with lights on

So I have a Roland XP-80 with a similar issue to the one this guy shows us on YouTube while replacing the battery in the thing. The LCD backlight is on, as is the LED on the disk drive. Apart from that – no sign of life.

For me it was an easy fix, as it was quite obvious what had happened:

Capacitor 201 (47 µF, 6 V) had blown up quite spectacularly. To remedy that, I soldered in one I had lying around (not exactly the same type; mine is rated 25 V), and after that all was well.

The stain on the board was quite substantial, so I had to carefully scratch the sod away to see whether the traces were still in good condition (it always helps if you have access to a service manual / service notes; you can find some at synfo.nl). And yes, everything was still connected according to my multimeter.

This would have been a job for a few minutes if I had done that in a sensible manner – but no, I had to start too far down the path.

Remember the simplest steps to find an issue with electronics (I think Louis Rossmann and Dave Jones deserve a bit of credit here):

  • Sniff the board – really, that was a dead give-away here. It stank of exploded component. I just ignored it.
  • Visual inspection – if it doesn’t smell funny, you should still look for the sh*t stain on the board. As you can see above, another dead give-away. I just ignored it.
  • Thou shalt check voltages – and this is where I started: at the power supply. Which was basically fine.

Now, after these steps, you can start using your brain: check clocks – which was what I was going to do next, but luckily the chip I wanted to probe was close to the exploded capacitor. You can follow signals, measure in circuit, desolder components to measure them outside of the circuit, and what not.

But only after the sniff test and the visual inspection! Took me the better part of three hours because I didn’t.

Salesforce: rendering of Visualforce pages and context

In a simple, straightforward programming situation, you would assume that functions are executed when they are seemingly called. When you are rendering a Visualforce page in Salesforce, things seem to be a bit mixed up. Though it seems counter-intuitive, it is understandable what happens.

Let’s take a simple example of a Visualforce page with a controller.

Page (atest):

<apex:page controller="Atest">
    Parameter: {!parameter} <br />
    Parameter2: {!parameter2} <br />
    Parameter: {!parameter} <br />
    Parameter2: {!parameter2} <br />
    Parameter: {!parameter} <br />
    Parameter2: {!parameter2} <br />
</apex:page>


Controller (Atest):

public class Atest {
    private Integer parameter;

    public Atest () {
        System.debug ('Constructor called.');
        parameter = 1;
    }

    public Integer getParameter () {
        System.debug ('getParameter');
        parameter = parameter + 1;
        return parameter;
    }

    public Integer getParameter2 () {
        System.debug ('getParameter2');
        parameter = parameter + 1;
        return parameter;
    }

    public PageReference nextPage () {
        PageReference page1 = new PageReference('/apex/atest2');
        Blob b = page1.getContentAsPDF();
        PageReference page2 = new PageReference('/apex/atest2');
        b = page2.getContentAsPDF();
        PageReference page3 = new PageReference('/apex/atest2');
        return page3;
    }
}
{!parameter} is a reference to the method getParameter () and {!parameter2} a reference to getParameter2 (). Ignore the method nextPage () for now…

So what you might expect is that the Visualforce renderer calls getParameter (). This increases the variable parameter by 1 and returns its new value. We see the output “Parameter: 2” – as expected. Then the renderer calls getParameter2 (). This again increases the variable parameter by 1 and returns its new value. We see the output “Parameter2: 3” – as expected.

Next, we want “parameter” again – seemingly a call to the method getParameter (). But now the method is not actually executed; parameter is not increased anymore. We get the outputs “Parameter: 2” and “Parameter2: 3” again and again, no matter how many times we think the method is called.

Now for the second part in the controller above, we need the VF page atest2, which is virtually identical, just a different file. Also, for convenience, to call the method in our class, add

    <apex:commandButton action="{!nextPage}" value="next page" />

to the page atest. When you now click “next page”, the page atest2 is created 3 times. To make sure it is actually rendered, we fetch the content of the page as a PDF twice, and the third time the page is returned as a PageReference, so you are taken to the page atest2.

What you now see is “Parameter: 4” and “Parameter2: 5”. Even though we have rendered the page atest2 3 times, the variable parameter has been increased only twice.

This is because the renderer works in the same context for all 3 renderings of atest2. getParameter () and getParameter2 () are each called exactly once, and only once, because we are rendering a page in a new context – the call to the method nextPage (). You could even create pages atest3 and atest4, have them rendered one after another (in one method), and “Parameter” and “Parameter2” would show the same values on each rendered page.

Any output of a method is cached, and the method is not called again unless you force a rerender – because that is what re-rendering is for. If you know that values will change, you have to instruct Visualforce to do a new rendering.

To get around this, make sure you create a new context for a new page with changing information. The easiest way to do so is, IMHO, to create a controller instance for every page that needs to be rendered, which in turn can be done by having a different controller for each subsequent page.


Do not change the information of a variable or method during one rendering context. Have all information calculated before anything is actually rendered, and do not change information inside a get-method. If information changes due to user input, have the sections that show information based on that input rerendered when necessary.

Overall: within one context, the VF renderer will call any getter exactly once.

Salesforce – Assets with or without Contact and/or Account

In test classes it is always a good thing not just to cover your code, but to actually test whether it works according to design. To make your tests shine, you should cover both kinds of cases: working examples as well as those where you expect an error to occur.

To test a trigger that ensures Contact and Account are set on an Asset (as long as certain conditions are fulfilled), I added a test case where the trigger could not work properly.

According to the Apex documentation, an Asset must have Contact and/or Account set, otherwise you will run into an Exception (FIELD_INTEGRITY_EXCEPTION: Every asset needs an account, a contact or both).

Screenshot of Asset Object Reference - AccountId must be set

So, my test class includes

Asset a = new Asset (Name = 'Test asset');
try {
    insert a;
    System.assert (false, 'Should not reach this, an Asset needs an Account or Contact');
} catch (Exception e) {
    // expected
}
However, in the project I currently work on, this assertion fails, as our Assets can exist with neither Contact nor Account.

And this is where the documentation is plainly wrong: it depends on the Organization-Wide Defaults for sharing. If you set Asset access to anything except the default (Controlled by Parent), you can create Assets without Account or Contact. So the documentation is wrong 75% of the time, as the settings “Private”, “Public Read Only” and “Public Read/Write” all allow Assets without either.

This was a pitfall for me, as my test class worked on one box, but not on another. And sometimes failing tests hint at an issue with the org itself. But only sometimes; most times it is because a developer did not set up the test correctly.

Salesforce API documentation could be better with API versions

Once again I found a nice – no, necessary – feature. When searching for RecordType IDs in Apex, you can either query for them with SOQL (burning one of your precious queries) OR you can just ask the schema.

With List<RecordTypeInfo> contactRTs = Schema.SObjectType.Contact.getRecordTypeInfos() you can get all available RecordTypes for this Object. With contactRTs[0].getName() you can get the label of the record type.

The label. This may depend on the language of the user, so it’s utterly useless in code. But there is also contactRTs[0].getDeveloperName() – yay! However, the documentation never states which API version is the minimum needed for a method call, and that is absolute crap. Why not just add a line with the API version? Otherwise you may get errors that contradict the documentation.
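Putting the calls above together – a sketch in Apex, not run here. 'My_Dev_Name' is a hypothetical record type developer name, and getDeveloperName() needs API version 43.0 (the Summer ’18 release mentioned below) or later:

```apex
// Resolve a RecordTypeId by developer name without burning a SOQL query.
for (Schema.RecordTypeInfo rti : Schema.SObjectType.Contact.getRecordTypeInfos()) {
    if (rti.getDeveloperName() == 'My_Dev_Name') {  // hypothetical developer name
        System.debug(rti.getRecordTypeId());
    }
}
```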

Yes, I know that the Summer ’18 release is not far off, so in this case it means it was a bit more than a week before I could use this needed feature. But it cost me quite some time – checking whether I had a typo, whether I misread … and then finally a search through the release notes for this method. With the API version in the docs, this would have been a matter of minutes…

Simple script for setting allowed IP address in tinyproxy

I’m using a dynamic IP address, but for Salesforce development I need a fixed one. As a proxy with a fixed IP address is enough, I use tinyproxy for that. Of course I cannot afford to run an unrestricted open proxy. This small script helps me in a half-automated way: I ssh into my proxy server, call the script, and it takes care of producing an appropriate line in the configuration:

#!/bin/bash
# Find the IP address of the calling ssh session
callerId=$(echo "$SSH_CONNECTION" | awk '{print $1}')
# Find the allow line and set it to that IP address
sudo sed -i "/# set from script/!b;n;cAllow ${callerId}" /etc/tinyproxy.conf
sudo systemctl restart tinyproxy

In order for this to work, you need a marker in your tinyproxy.conf. This script looks for “# set from script”, and replaces the following line with an Allow-entry.
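To see what the sed rule does without touching the real config, here is the same one-liner run against a throwaway file (the IP address is a placeholder):

```shell
# Build a throwaway config containing the marker line
cat > /tmp/tinyproxy-demo.conf <<'EOF'
# set from script
Allow 0.0.0.0
EOF
# The sed rule: skip lines until the marker matches, advance one line (n),
# then replace (c) that line with the new Allow entry
sed -i '/# set from script/!b;n;cAllow 198.51.100.23' /tmp/tinyproxy-demo.conf
cat /tmp/tinyproxy-demo.conf
```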

mods.curse.com might have leaked data

Some time ago I played World of Warcraft. Hell, was that addictive. I’m kind of happy that I don’t do it anymore. To be effective in raids I needed addons, which I got from mods.curse.com. And as I am wary of giving out my main email address, I once again created one exclusively for this purpose.

For a few months now I have been finding spam sent to that particular address. It’s not an address you could easily guess, such as info@ or contact@; it was specifically tailored to relate both to curse and to my WoW username. Mails to it land in my catch-all inbox, where most of the spam ends up.

This particular wave of spam is from “Sale4MichaelKors”, “Deals 4 Jordan”, “Pandora Jewelry” and “UGG Boots”. Stuff which I don’t receive for any other email address.

Now, I have contacted the support, but the last reply was “We can confirm that we are not aware of any mishandling of your user data, nor any incidents that would have exposed it. If you would like more information on Twitch’s Privacy Policy please feel free to read up here.”.

The fact remains that this is the one and only place where I used that email address. Either they have sold my information (I do not believe that) or the data has been stolen. I don’t claim it to be a recent leak: I have a Yahoo address that was definitely compromised, and I could clearly see that the obtained information was not abused until quite some time after the leak. I assume the same is true in this case. So the leak is not necessarily new.

However, I did not get any specifics what the ‘security team’ had looked into. Maybe they only looked at potential incidents this year, I don’t know.

What I know is: this email address was exclusively used with curse.com, and now I receive spam on it. I draw my conclusions from those facts.


Salesforce: $Component-merge fields behaving unexpectedly

So I was writing a slightly complicated form validation in JavaScript. For that I needed access to the values of all fields of an <apex:form>. Worse, some fields were shown only conditionally.

So I assumed the validation would merely need to check whether the fields existed, as conditional rendering does not add the fields to the DOM (in contrast to hiding them with display: none). My Visualforce page looked something like this:

<apex:page controller="ConditionalRerenderController">
    <apex:form>
        Click me
        <apex:inputCheckbox value="{!condition}" id="firstId">
            <apex:actionSupport event="onclick" rerender="conditionalBlock"/>
        </apex:inputCheckbox>
        <apex:pageBlock id="conditionalBlock">
            <apex:pageBlockSection rendered="{!condition}" columns="1">
                <apex:inputText id="secondId" value="{!stringValue}" /><br/>
            </apex:pageBlockSection>
        </apex:pageBlock>
    </apex:form>
</apex:page>

The JS to get the value of the text input field would be rather simple

var inputField = document.getElementById('secondId');
if (inputField != null) {
    // ...
}

However, the id attribute in Visualforce does not translate directly to the HTML attribute of the same name. To make sure a generated id is in fact unique, Salesforce prepends information about the context, making it impractical to hard-code the id in JavaScript.

But there is the global merge field $Component, which lets you resolve the generated HTML id. So I expected this to work:

var inputField = document.getElementById('{!$Component.secondId}');
if (inputField != null) {
    // ...
}

But inputField would always be null, no matter whether the checkbox had been clicked before, rendering the input field.

This is quite a problem: as it turns out, this merge field is not re-evaluated outside of the block that gets re-rendered. Instead, the expression always evaluates to an empty string outside of the conditionally rendered block. So you would need to put all JavaScript that needs the id of such an element within the conditionally rendered block.

Or – maybe more workable – you could use class names as pseudo-IDs. If you add

    styleClass="pseudoId"

to the input field, you can access it with

var elements = document.getElementsByClassName('pseudoId');
if (elements != null && elements.length > 0) {
    var inputField = elements[0];
    // ...
}
Have a try, and notice how the id of the text input only appears in the conditionally rendered block.
The page:

<apex:page controller="ConditionalRerenderController">
    <apex:form>
        Click me
        <apex:inputCheckbox value="{!condition}" id="firstId">
            <apex:actionSupport event="onclick" rerender="conditionalBlock"/>
        </apex:inputCheckbox>
        firstId: <apex:outputText value="{!$Component.firstId}" /><br/>
        secondId: <apex:outputText value="{!$Component.secondId}" /><br/>

        <apex:pageBlock id="conditionalBlock">
            <apex:pageBlockSection rendered="{!condition}" columns="1">
                <apex:inputText id="secondId" value="{!stringValue}" /><br/>
                secondId: <apex:outputText value="{!$Component.secondId}" /><br/>
            </apex:pageBlockSection>
        </apex:pageBlock>
    </apex:form>
</apex:page>

The controller:

public class ConditionalRerenderController {
    public Boolean condition { get; set; }
    public String stringValue { get; set; }

    public ConditionalRerenderController() {
        this.condition = false;
        this.stringValue = 'empty';
    }
}