QRDN

quite random domain name



tailscale VPN: Access home services by DNS name

Same problem as https://aottr.dev/posts/2024/08/homelab-using-the-same-local-domain-to-access-my-services-via-tailscale-vpn/: I have set up a tailscale mesh VPN and want to access services in my home network even without their hosts being part of the VPN (think IoT devices without resources or support for the tailscale daemon).

Subnet Routes

The tailscale client (and headscale, the self-hosted coordination server) support subnet routers for exactly this purpose: I configure the tailscale client on my NAS to --advertise-routes 192.168.168.0/24,<2003:... prefix from Telekom>::/56 and then use headscale nodes approve-routes on my VPS to allow those routes to be published to all other devices in the VPN.
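
A minimal sketch of the commands involved (the headscale route-approval subcommand and its flags vary between versions; the node ID and the IPv6 prefix are placeholders):

# on the NAS (the subnet router); 2001:db8:0:100::/56 stands in for the Telekom prefix
tailscale up --advertise-routes=192.168.168.0/24,2001:db8:0:100::/56

# on the VPS running headscale: approve the routes advertised by that node (ID 1 here)
headscale nodes list-routes
headscale nodes approve-routes --identifier 1 --routes "192.168.168.0/24,2001:db8:0:100::/56"

# on every client that should reach the home network
tailscale up --accept-routes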

TODO: whenever my CPE at home (the router which terminates the internet connection using e.g. PPPoE) gets a new IPv6 prefix via Prefix Delegation, update the advertised route and approve it.
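
One possible approach, as a sketch: a small script run periodically (e.g. from cron) on the NAS that re-advertises whatever global prefix it currently sees. The interface name, advertising the on-link /64 instead of the full delegated /56, and relying on an auto-approver (or a later manual approval) on the headscale side are all assumptions here.

#!/bin/sh
# hypothetical cron job on the NAS: keep the advertised IPv6 route in sync with
# the current prefix (sketch only -- eth0 and the /64 are assumptions, see above)
set -eu
V4_ROUTE="192.168.168.0/24"
V6_PREFIX=$(ip -6 route show dev eth0 \
            | awk '$1 ~ /^[23][0-9a-f]*:/ && $1 ~ /\/64$/ { print $1; exit }')
[ -n "$V6_PREFIX" ] || exit 0
tailscale set --advertise-routes="$V4_ROUTE,$V6_PREFIX"
# a changed route still needs approval on the headscale side,
# unless an auto-approver is configured there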

Now other devices can tailscale up --accept-routes and the daemon will inject some fancy firewall rules which route traffic to addresses in my home network over the tunnel -- first part done.

home network DNS

My homeserver runs a DynDNS client (roughly sketched after the list below), which ensures a publicly resolvable DNS name nas.example.com always points to

  • (A) the public IPv4 assigned to the CPE, which can port-forward single ports to the NAS to expose some of its services directly to the internet
  • (AAAA) the globally routable IPv6 it has assigned to itself (read out locally using ip -6 -json a s scope global -deprecated -temporary -mngtmpaddr | jq -r '.[0].addr_info[].local | strings' | head -1)
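
For illustration, such an update could look roughly like this. The author's actual DynDNS client and provider are not shown; the dyndns2-style update URL, the credentials and the external "what is my IP" service are assumptions -- only the ip/jq pipeline is from the list above:

#!/bin/sh
# hypothetical dyndns2-style update script
set -eu
HOST="nas.example.com"
V6=$(ip -6 -json a s scope global -deprecated -temporary -mngtmpaddr \
     | jq -r '.[0].addr_info[].local | strings' | head -1)
V4=$(curl -fsS https://api.ipify.org)   # public IPv4 as seen from outside
curl -fsS -u "user:password" \
  "https://dyndns.example.com/nic/update?hostname=${HOST}&myip=${V4},${V6}"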

A static alias record *.nas CNAME nas (expanded: *.nas.example.com CNAME nas.example.com) lets me use any name below that zone for services on the NAS (see the sketch after this list), allowing

  • every service to have its own hostname, so no messing with ports, and a different cookie namespace in web browsers
  • the NAS to get a wildcard certificate via ACME from Let's Encrypt valid for all its services. I made sure the NAS responds to unconfigured hostnames with an error (instead of responding e.g. with the first one in the configuration).
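
A sketch of what this looks like. The record notation is illustrative, and the certbot invocation is only an example (the author's actual ACME client and DNS setup are not named); a wildcard certificate requires the DNS-01 challenge:

# static records in the example.com zone (illustrative)
#   nas.example.com     A/AAAA  <kept up to date by DynDNS>
#   *.nas.example.com   CNAME   nas.example.com

# hypothetical wildcard certificate request via DNS-01, e.g. with certbot:
certbot certonly --manual --preferred-challenges dns \
  -d nas.example.com -d '*.nas.example.com'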

...conflicts with VPN

This setup is not reliable with tailscale subnet routes. When a client tries to access myservice.nas.example.com:

  • If it is in some other network, without VPN, my router firewall blocks the access (except for configured port forwardings) -- as it should
  • If it is in my home network, it can access myservice just fine: it has an IPv6 address from the same prefix and thus a local route. Apparently that is always used to connect to the NAS; using the A record instead would send it to the public IPv4 of the CPE, i.e. take a turn just "outside" my router and get blocked by its firewall.
  • If it is in some other network and connected to the tailscale VPN, the AAAA record points it to an IPv6 address in my home network, which gets routed correctly through the tailnet, but the A record points to the public IPv4 of my CPE, on which the firewall blocks access. Whether the service can be accessed thus depends on which IP family the client happens to pick, which I cannot influence.

solution: IPv6-only internal services

The article linked at the start solves this problem using split DNS: VPN-connected nodes get a different DNS resolver for the zone(s) used in my home network (or any network subnet-routed into the VPN). However, this

  1. would require me to maintain that resolving DNS server,
  2. would require the tailscale client to configure that resolver on all clients wishing to communicate with my home net (MagicDNS is a standard feature in tailscale, but off by default in Linux clients, probably because it can mess with the local configuration), and
  3. could lead to issues when clients use cached DNS responses from before connecting to the VPN (e.g. browsers do so: inspect Firefox's cache at about:networking#dns; the cache can be disabled via network.dnsCacheEntries and network.dnsCacheExpiration)

So instead I switched my internal services (i.e. those not exposed to the internet via a port forwarding) to IPv6 only: DynDNS on my NAS now updates not only the A and AAAA records for nas.example.com, but also an extra AAAA (but no A) record nas-services.example.com with its (globally routable) IPv6 address. The CNAME *.nas.example.com now (statically) points to nas-services.example.com. This leaves all services without an A record, so they only resolve to IPv6, which (in the three client cases from before) is either locally reachable, globally reachable but firewall-blocked, or routed via VPN and thus reachable.
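
The resulting record set, written out zone-file style (addresses are illustrative placeholders):

nas.example.com.           A      203.0.113.42               ; DynDNS-updated, for port-forwarded services
nas.example.com.           AAAA   2001:db8:0:100::10         ; DynDNS-updated
nas-services.example.com.  AAAA   2001:db8:0:100::10         ; DynDNS-updated, deliberately no A record
*.nas.example.com.         CNAME  nas-services.example.com.  ; static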


weechat accent color

Weechat config using an "accent" color for important UI elements which can quickly be changed (e.g. to switch between light and dark terminal color scheme):

/color alias 22 accent
/set buflist.format.buffer_current "${color:,accent}${format_buffer}"
/set weechat.bar.status.color_bg accent
/set weechat.bar.title.color_bg accent
/set weechat.color.separator accent

Thunderbird Lightning: mark all-day events in month/multi-week calendar view

The Thunderbird Lightning calendar in month or multi-week view should mark the empty space of days which have an all-day event differently.

Write into your ~/.thunderbird/<profilename>/chrome/userChrome.css (you may need to create the chrome/ directory first):

.calendar-month-day-box-list:has(calendar-month-day-box-item[status=CONFIRMED][allday=true]) {
  background: repeating-linear-gradient(
    -45deg,
    rgba(0, 0, 0, 0.05) 0px 10px,
    rgba(255, 255, 255, 0.05) 10px 20px
  );
}

userChrome.css needs activation: in about:config set toolkit.legacyUserProfileCustomizations.stylesheets to true. I then had to restart thunderbird for this to be effective.

TODO: use the color of the all-day event(s). Maybe the element() function could be used to reference the rendered event in the background definition?



Re-Using tasmota smart plugs in home assistant

I use a few smart power plugs which include an ESP microcontroller, a relay, and a power meter. These run Tasmota and report to Home Assistant (compare e.g. https://blog.koehntopp.info/2020/05/20/gosund-and-tasmota/).

Now I don't want to buy new plugs for all devices which I might want to measure for some time: e.g. the fridge has a very constant energy usage, so measuring it once per season suffices for all I want to know.

When re-using a smart plug for a different device, I'd like Home Assistant not to connect the two measurement histories. Just renaming the device in the Tasmota web interface is not enough, though: Home Assistant identifies the devices by something else (probably serial number) and would connect the histories / track the rename.

I have to:

  1. Remove the device in Home Assistant (Integrations > Tasmota > Select device > … menu)
  2. Rename via the Tasmota web UI (I change Device Name and Friendly Name; probably one of them is enough)
  3. Only now reboot the device
  4. Seconds later, it shows up in Home Assistant under the new name

Monitoring a freifunk node

I own/operate a freifunk router, running the Freifunk Darmstadt firmware which is based on gluon, which itself is based on OpenWRT.

I'd like to monitor the operation of my node (e.g. wifi client count on 2.4 and 5 GHz), but I want to avoid building my own firmware image. I already have prometheus and grafana running, so all I need is a node_exporter on the device exporting metrics over HTTP. I can install such with opkg install prometheus-node-exporter-lua, but whenever the autoupdater installs a new firmware version (often), I'd have to manually repeat that installation.

As noted in the gluon wiki, this can be automated using /etc/rc.local, which is run on each boot. Just add the following snippet (before the closing exit 0):

if ! test -e /usr/bin/prometheus-node-exporter-lua ; then
  sh -c 'sleep 60 && \
    logger "installing prometheus-node-exporter-lua"
    opkg update
    opkg install prometheus-node-exporter-lua \
      prometheus-node-exporter-lua-nat_traffic \
      prometheus-node-exporter-lua-netstat \
      prometheus-node-exporter-lua-openwrt \
      prometheus-node-exporter-lua-wifi \
      prometheus-node-exporter-lua-wifi_stations
    logger "installing prometheus-node-exporter-lua returned $?"
  ' &
fi

Waiting 60 seconds after boot hopefully is enough for the WAN network to be up.

Correct operation can be tested by running the file manually (sh /etc/rc.local) and checking for the "installing ... returned ..." line with logread -f.
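
On the prometheus side, a scrape job for the node could then look roughly like this (the hostname is a placeholder; 9100 is the usual node-exporter port, check the package's default config):

scrape_configs:
  - job_name: 'freifunk-node'
    static_configs:
      - targets: ['my-freifunk-node.example.org:9100']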



The Nightmare Stacks map

I very much enjoyed "The Nightmare Stacks" by Charles Stross, my favourite book of the "Laundry Files" series (as released so far).

The plot is set in a real place, so I used the excellent umap website to draw the places & campaign movements named in the book as an OpenStreetMap overlay (actually 3 layers: first the places & ley lines from the preparation phase, second the campaign of the Host of Air and Darkness, third the counter-movements by the Urük).



checkmk setup details (inside podman container)

Things about CheckMK not (easily) found in the documentation.

Every host is pinged; this service is named "Check_MK".

Run the containerized version in podman with --cap-add net_raw on the podman run command for ping to work; see the sketch below.
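
A sketch of the run command (image tag, port mapping and volume name are placeholders; the relevant part is --cap-add net_raw):

podman run -d --name checkmk \
  --cap-add net_raw \
  -p 8080:5000 \
  -v checkmk-sites:/omd/sites \
  checkmk/check-mk-raw:latest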

Install Plugins in containerized Raw edition:

$ podman exec -it -u cmk  checkmk  /bin/bash
OMD[cmk]:~$ cd /tmp
OMD[cmk]:~$ curl -LO https://raw.githubusercontent.com/f-zappa/check_mk_adsl_line/master/adsl_line-1.3.mkp
OMD[cmk]:~$ mkp install adsl_line-1.3.mkp

Monitoring RHEL7

  • RHEL7 has systemd v219, but the agent requires 220. Remedy: "legacy mode"
  • avoid xinetd? use the agent over SSH instead
  • ssh known_hosts in the checkmk container: run cmk --check $hostname, which lets "the correct" ssh ask for accepting the host key. podman exec -u cmk $containername /bin/ssh $hostname did not suffice in my case

IPv6

checkmk defaults to IPv4 only, change in Settings > Folder "Main" > Dual stack. Selecting "both" will do ping4 and ping6 checks separately.

The podman container gets no public IPv6 route, only link-local (fe80::), thus IPv6 pings to public addresses fail. Apparently an open issue: https://github.com/containers/podman/issues/15850.


Reading BTLE advertisements on Linux

Project context: reading environmental sensor data from card10 into influxDB.

GATT / ESS

epicardium (C Firmware) exports sensor data over Bluetooth GATT ESS (environmental sensing service).

GATT requires pairing, connecting → only 1:1, encrypted

Reading possible with card10_iaq_notify.py from https://git.card10.badge.events.ccc.de/card10/firmware/-/merge_requests/508/diffs

Turns out the eCO2 values computed by the BSEC library are quite bogus compared to a proper NDIR CO2 sensor: https://dm1cr.de/co2-sensor-vergleich. I replicated these results.

Correlate BME680 with MH-Z19c

Now the idea is to check if the raw resistance values from the BME680 have some better correlation to actual CO2 values, without the intelligence from the proprietary BSEC library.

Epicardium has these values, but doesn't export them over Bluetooth ESS. So instead, use a python script to send BLE advertisements ourselves: https://codeberg.org/scy/sensible-card10/ and extend it to include the raw resistance value.

reading GAP / BLE advertisements on linux

This is a real problem due to lack of documentation of bluez5. Most online searches turn up the deprecated hcitool and friends. There is a python library bluepy which implements BLE advertisement scanning, but docs are limited too: https://ianharvey.github.io/bluepy-doc/. Some blogpost mentions the backend binary bluepy-helper which can also be talked to directly: https://macchina.io/blog/internet-of-things/communication-with-low-energy-bluetooth-devices-on-linux/

GAP ~= beacons, limited data rate, 1:n, unencrypted. See https://elinux.org/images/3/32/Doing_Bluetooth_Low_Energy_on_Linux.pdf, https://learn.adafruit.com/introduction-to-bluetooth-low-energy?view=all

Check that the source is actually sending advertisements (on the card10 serial console, sudo picocom -b 115200 /dev/ttyACM0, there was a python stacktrace in my case).

Bluetooth snooping using bluez' btmon: no filtering, so e.g. A2DP traffic clutters the output.

Better: the reference tool blescan from bluepy:

$ sudo blescan -a -t0
    Device (new): ca:4d:10:XX:XX:XX (public), -70 dBm
        16b Service Data: <1a18c2cccc071b16187e0f00b100016c00860260>
        Complete Local Name: 'card10'

Copy its implementation to do your own data parsing:

import struct
from bluepy import btle

class ScanPrint(btle.DefaultDelegate):
    def handleDiscovery(self, dev: btle.ScanEntry, isNewDev: bool, isNewData: bool):
        # ScanEntry.getValueText() returns hex digits; struct.unpack() needs raw bytes from getValue()
        data = dev.getValue(btle.ScanEntry.SERVICE_DATA_16B)
        if data is None or len(data) != 20:
            return  # not the card10 service data format
        magic, version, temp, hum, press, gr, iaqa, iaq, eco2, battery = \
            struct.unpack('<HHhHLhBHHB', data)

scanner = btle.Scanner().withDelegate(ScanPrint())
scanner.scan(0)  # needs root (like blescan above); handleDiscovery() runs for each advertisement


Merge multiple PDF pages into one page

Example: scanning a credit card results in a PDF with two pages, each containing the image of one side of the card.

Join the images onto one page (with a small gap between them):

pdfjam input.pdf --nup 1x2 --noautoscale=true --fitpaper=true --delta "10 10" --outfile out.pdf

plasmashell stuck at 100% CPU

My Fedora 35 had its plasmashell process stuck at 100% CPU after some switching between Wayland and X11 sessions. When trying to change global keyboard shortcuts (e.g. Alt-F1 for the app menu launcher), it would freeze completely for about 30 seconds, repeatedly. A freshly created user did not have that problem.

The fix was to remove ~/.config/plasmashellrc.
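
One way to apply the fix without logging out (Plasma 5 tool names assumed):

kquitapp5 plasmashell
rm ~/.config/plasmashellrc
kstart5 plasmashell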


nextcloud on fedora 27, using snap

$ snap install nextcloud
$ sudo snap get nextcloud ports
Key          Value
ports.http   80
ports.https  443
$ sudo snap set nextcloud ports.https=12345 ports.http=12344
$ sudo $(which nextcloud.occ) install <admin-user> <admin-pw>

htc HD2 Startup sequence // howto update Bootloader

Source: https://forum.xda-developers.com/showthread.php?t=1402975

Startup sequence

  1. HSPL ("tri-colored screen")
  2. Bootloader: MAGLDR, cLK
  3. one of
    • recovery: CWM, TWRP, ...
    • OS (Android, Windows, ...)

Magic Buttons at Startup:

  1. HSPL: keep Volume down and press End call/Power once
  2. MAGLDR: keep End call/Power

Update Bootloader

(with broken Start call button):

  • place leoimg.nbh on external SD card (FAT32 partition)
  • start into HSPL
  • confirm update by pressing End call/Power

USB commands:

Source: https://forum.xda-developers.com/showthread.php?t=1292146

  • fastboot flash recovery $recovery.img
  • fastboot oem boot-recovery boot into recovery
  • Show help, which tells how to emulate hardware keys: fastboot oem \?

How to save a whole mediawiki into a git repo

Recently I tried to archive the contents of our old MediaWiki instance to a git repository. Somebody else had already done that, using some scripts from MediaWiki, but these ignored the page histories, saving only the most recent version of each page, and offered no possibility to also save uploaded files, especially images.

So I decided to see if I could do better, and found Git-Mediawiki. I had to fiddle a bit, because Arch Linux's git package only ships it as uninstalled files, and because of our broken TLS certificate, but eventually got the import to work:

pacman -Sy perl-mediawiki-api perl-datetime-format-iso8601 perl-lwp-protocol-https
sudo ln -s /usr/share/git/mw-to-git/git-mw.perl /usr/lib/git-core/git-mw
sudo ln -s /usr/share/git/mw-to-git/git-remote-mediawiki.perl /usr/lib/git-core/git-remote-mediawiki
export PERL5LIB=/usr/share/git/mw-to-git/
export PERL_LWP_SSL_VERIFY_HOSTNAME=0  # this makes the whole TLS encryption insecure -- I use it because we don't have a valid certificate, and I don't intend to write back to the wiki
git clone mediawiki::https://wiki.chaos-darmstadt.de/w

The result is a linear history with one commit for each saved revision of any page. There seem to be some bugs, though:

  • subpages are not exported, like our main pages' subsections "Hauptseite/Header" etc.
  • some page histories occur twice in the git history, e.g. for page "Mate-Basteln"


Some of the things I did wrong when getting it to work:

Errors

  1. wrong endpoint https://wiki.chaos-darmstadt.de/

    fatal: could not get the list of wiki pages.
    fatal: 'https://wiki.chaos-darmstadt.de/' does not appear to be a mediawiki
    fatal: make sure 'https://wiki.chaos-darmstadt.de//api.php' is a valid page
    fatal: and the SSL certificate is correct.
    fatal: (error 2: 404 Not Found : error occurred when accessing https://wiki.chaos-darmstadt.de//api.php after 1 attempt(s))
    fatal: Could not read ref refs/mediawiki/origin/master
    
  2. Wrong endpoint https://wiki.chaos-darmstadt.de/wiki/

    Searching revisions...
    No previous mediawiki revision found, fetching from beginning.
    Fetching & writing export data by pages...
    Listing pages on remote wiki...
    fatal: could not get the list of wiki pages.
    fatal: 'https://wiki.chaos-darmstadt.de/wiki/' does not appear to be a mediawiki
    fatal: make sure 'https://wiki.chaos-darmstadt.de/wiki//api.php' is a valid page
    fatal: and the SSL certificate is correct.
    fatal: (error 2: Failed to decode JSON returned by https://wiki.chaos-darmstadt.de/wiki//api.php
    Decoding Error:
    malformed JSON string, neither tag, array, object, number, string or atom, at character offset 0 (before "<!DOCTYPE html>\n<ht...") at /usr/share/perl5/vendor_perl/MediaWiki/API.pm line 400.
    
    Returned Data:
    <!DOCTYPE html>
    <html lang="de" dir="ltr" class="client-nojs">
    <head>
    <meta charset="UTF-8" />
    <title>Diese Aktion gibt es nicht – Chaos-Darmstadt Wiki</title>
    
    ... (all the HTML from the page) ...
    
    fatal: Could not read ref refs/mediawiki/origin/master
    
  3. PERL5LIB not set

    Klone nach 'wiki' ...
    Can't locate Git/Mediawiki.pm in @INC (you may need to install the Git::Mediawiki module) (@INC contains: /usr/lib/perl5/site_perl /usr/share/perl5/site_perl /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5/core_perl /usr/share/perl5/core_perl .) at /usr/lib/git-core/git-remote-mediawiki line 18.
    BEGIN failed--compilation aborted at /usr/lib/git-core/git-remote-mediawiki line 18.
    
  4. SSL/TLS cert not accepted. I work around this by disabling the check, because I know the cert is broken and I don't intend to write back to the wiki, so in the worst case my export attempt is tampered with. In general, always correctly check your certificates and treat this as a severe error!

    Searching revisions...
    No previous mediawiki revision found, fetching from beginning.
    Fetching & writing export data by pages...
    Listing pages on remote wiki...
    fatal: could not get the list of wiki pages.
    fatal: 'https://wiki.chaos-darmstadt.de/wiki/' does not appear to be a mediawiki
    fatal: make sure 'https://wiki.chaos-darmstadt.de/wiki//api.php' is a valid page
    fatal: and the SSL certificate is correct.
    fatal: (error 2: 500 Can't connect to wiki.chaos-darmstadt.de:443 (certificate verify failed) : error occurred when accessing https://wiki.chaos-darmstadt.de/wiki//api.php after 1 attempt(s))
    fatal: Could not read ref refs/mediawiki/origin/master
    
  5. git remote helper mediawiki not installed (ln -s commands from above):

    fatal: Unable to find remote helper for 'mediawiki'
    

Qt "ModelTest" application

Several sources on the Internet propose the ModelTest application to check whether custom QAbstractItemModel subclasses behave correctly. The application is contained in the official sources (Qt5 and Qt4), and documented (in a sense) in the official wiki. What is not clear, however:

  • The mentioned .pri file is missing; one version can be found on GitHub
  • Although not stated clearly, ModelTest is not something you can drop into your otherwise normal application, comparable to Q_ASSERT statements. Instead, it's a unit test for the QTest framework, i.e. a standalone test of a separate model instance. If your model proxies a complex data structure which cannot easily be recreated or mocked for the test, ModelTest is useless.
  • Various versions of ModelTest are in the Qt repos, at KDE, and elsewhere. The ones in the Qt repositories have not received many updates recently (last commit from 2016-01-21), so it's not clear if they are maintained at all


debugging ANTLR4 Lexer

grun MyLexer tokens -tokens < testfile

invokes the TestRig on the lexer, printing the tokens it recognized. Example stdout:

[@0,0:9='google.com',<4>,1:0]
[@1,10:10='\n',<2>,1:10]
[@2,11:11='\t',<1>,2:0]

Format of this output: A list of tokens, where each is:

[@tokenIndex,startIndex:endIndex='spelling',<tokenId>,lineNo:columnNo]

or (if the token is not on the default channel)

[@tokenIndex,startIndex:endIndex='spelling',<tokenId>,channel=channelId,lineNo:columnNo]
  • tokenIndex - in the whole output, starting at 0
  • startIndex,endIndex - char/byte? in the input stream
  • spelling - the literal text
  • tokenId - can be found in the .tokens file
  • channelId - index of the channel(?)
  • lineNo,columnNo - line, column of the token start

Tip: append | column -t -s, | less to create a table delimited at , and increase readability (and pass through less for paging).

This does not output "sub-tokens", i.e. it only shows the highest-level tokens, not the ones they are assembled from.


grep -f

grep -f patterns.txt is horribly slow. Even looping over the patterns and calling grep once per pattern, for l in $(cat patterns.txt) ; do grep "$l" "$file" ; done, is orders of magnitude faster.



TIL - htop graphs

Today I learned: htop can show the value history of the meters in the top bar: go to their configuration, then select any meter which has a [mode] in square brackets behind it; F4 toggles that mode, with live preview, cycling through plain number view, ASCII-graphed history, and "LED", which renders big ASCII-art numbers.



Lol, Unicode…

Some Unicode findings:

  • 📳 U+1F4F3 : VIBRATION MODE
  • 🚂 U+1F682 : STEAM LOCOMOTIVE
  • 📢 U+1F4E2 : PUBLIC ADDRESS LOUDSPEAKER
  • 🍝 U+1F35D : SPAGHETTI

Migrating a router from DD-WRT to OpenWRT

How to replace the DD-WRT firmware on your router with OpenWRT.


My router: TL-WR1043ND, version (DE) v1.0

Old firmware: DD-WRT r19519

New firmware: OpenWRT "Attitude Adjustment" 12.09 Beta 2, filename: openwrt-ar71xx-generic-tl-wr1043nd-v1-squashfs-factory.bin


First, revert to the original TP-Link firmware [1], using the web flash interface and a special image [2]. This took very long in my case: first the 200-second countdown in the browser ran out, then some minutes passed where nothing happened. Finally the browser showed a confirmation dialog reading "Update failed", but the router was unresponsive (no ping, no DHCP). I power-cycled it, and then it was up and running fine, with the original firmware reading:

3.11.5 Build 100427 Rel.61427n
Hardware Version:   
WR1043N v1 00000000

Then I simply put the OpenWRT factory file from above into the web flash form and uploaded it. This took some time, then OpenWRT was up and the LuCI web interface asked me to set a root password.


[1]: got this instruction from http://wiki.openwrt.org/doc/howto/generic.flashing#via.original.firmware

[2]: from here http://www.dd-wrt.com/phpBB2/viewtopic.php?t=85237


quick'n'dirty PXE

… on Archlinux:

wget -O /var/ftpd/ipxe.pxe http://releng.archlinux.org/pxeboot/
pacman -S tftp-hpa
sudo in.tftpd -L

append to the dnsmasq config on your DHCP server:

dhcp-boot=/var/ftpd/ipxe.pxe,,<ip-of-tftp-server>

E135 backlight

Disclaimer: this article is rather old and does not reflect the current state of backlight handling on the Thinkpad E135 under Linux.

At first, the backlight was fixed at maximum. No tool could change it; /sys/class/backlight/acpi_video0/brightness could be written to and its value changed, but the backlight didn't. I tried the solutions in [1], excluding those which would just echo to the above /sys/... file, until I found out the kernel parameter acpi_backlight=vendor did the trick.
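
To make the parameter permanent, it can go into the bootloader config; a sketch assuming GRUB (paths and the mkconfig command name differ per distribution):

# append acpi_backlight=vendor to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
sudo grub-mkconfig -o /boot/grub/grub.cfg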

Now /sys/class/backlight no longer contained an acpi_video0 directory, but a thinkpad_screen directory, which has the same brightness files; these change when using the Fn keys for brightness.

If I write to them, their content changes but the backlight does not; at least the Fn key combos work.


KMix with multiple identically named master channels

On the Thinkpad Edge E135, ALSA recognizes 2 sound cards (0 and 1), of which #1 is the analog one I want to use and control -- but it is not the default one. alsamixer can control it via its F6 menu, but it's still not the default. Creating the asound.conf below [1] fixes that.

Now KMix shows 2 tabs (both labelled "HD Audio Generic"), one for each card. Unfortunately it defaults to controlling the digital output with its panel icon, too. Worse, the dialog to change that "Master Channel" gets confused by the identical names and just doesn't display any channel, so you can't change it.

Fix: stop KMix (dunno if that's really necessary, but probably is), open ~/.kde4/share/config/kmixrc and set the MasterMixer= and MasterMixerDevice= entries in the [Global] section. MasterMixer specifies an equivalent to the ALSA card, but apparently 1-indexed (so here it was "ALSA::HD-Audio_Generic:1" and I set it to "ALSA::HD-Audio_Generic:2"); MasterMixerDevice names the channel which should be controlled as master ("IEC958:0" here, changed to "Master:0"). If you're unsure what entries to set, simply look at the end of the section names and compare them with the channels displayed in KMix below each tab (i.e. card).

Note: in later versions of KMix (IIRC around 4.15) this stopped working, so I stayed on that version.


  1. /etc/asound.conf:

    defaults.pcm.card 1
    defaults.pcm.device 0
    defaults.ctl.card 1
    


32bit wine-prefixes

tl;dr: WINEARCH=win32 creates a 32-bit prefix; to confirm, check that $WINEPREFIX/drive_c/Program Files (x86) does not exist (its presence means a 64-bit prefix)

A wine prefix is a directory containing a lot of the files wine needs to simulate a windows installation. The "standard" wine prefix ~/.wine is used whenever you don't specify a different one in the environment variable $WINEPREFIX. Like a normal windows installation, the environment (e.g. the provided libraries) may be either 32- or 64-bit. Wine copies these libraries into the prefix when it is created, so you have to specify the desired bitness at that point, using the $WINEARCH environment variable. To create a 32-bit prefix, just execute wine with WINEARCH=win32 (and the desired $WINEPREFIX) set, e.g.

WINEARCH=win32 WINEPREFIX=~/.wine_32bit winecfg

Finally, to check what bitness a given prefix has, just check if there is a directory drive_c/Program Files (x86) in the prefix - if there is, it's a 64bit prefix.
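
As a quick shell check (the path contains spaces, so it needs quoting):

if [ -d "$WINEPREFIX/drive_c/Program Files (x86)" ]; then
  echo "64-bit prefix"
else
  echo "32-bit prefix"
fi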