“I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose.” — Sherlock Holmes, A Study in Scarlet
There’s a particular kind of frustration that I suspect a lot of researchers know well: you’re in the middle of something (an analysis, a blog post, a deck) and you know you’ve written, read, or bookmarked something about this before. But where? Which device? What did you call it?
For me, that somewhere spans three places: folders on my computer (best described as neuro-spicy organized chaos – rabbit holes within rabbit holes), Apple Notes full of quick thoughts, and a Safari Reading List of unread articles across a variety of subject areas. Good information lives in all three. Finding it quickly is another matter.
So I built something to fix that, and yes, by built I do mean vibe-coding played a major role. But this was something I was doing for fun. Don’t hate the game. Adapt how you play.
Mind Palace is a personal knowledge search engine for macOS. It runs locally — no cloud, no API calls, no data leaving your machine — and indexes your Desktop folders, Apple Notes, and Safari Reading List into a single, fast, full-text search interface. The UI leans into the Holmes aesthetic too. Categories are called Rooms and the home screen panels are illustrated like scenes from 221B Baker Street. I had a lot of fun with that part.
When you’re navigating on the main Mac device, the folder headings have 🚪 links that open the respective folder in Finder. Rooms with doors, and doors within doors.
You run it, open a browser, and you’ve got one search box that reaches across everything. It also installs as a PWA, so I have it pinned on my iPad and phone. I can trigger a rescan from any of those devices and the search index updates on my Mac in the background. The success I had updating the MalChela interface to a PWA got me thinking about other use cases I could adapt for myself.
The name felt obvious. The Baker Street brand has always leaned into the Holmes aesthetic, and the Mind Palace is my attempt to build something like that for the chaotic archive that is my actual working brain. I had a pretty clear picture of what I wanted: something that would index the three places I actually put things, serve a clean search UI I could use from any device on my network, and stay entirely local. Simple enough in concept.
The reality was a little more interesting. Apple Notes in particular has a lot going on under the hood. Some notes live in a local SQLite database. Others exist only in iCloud-synced folders and require a completely different access strategy. Getting both to work reliably, and fast, meant going down some rabbit holes I didn’t fully anticipate when I started. But that’s usually where the interesting engineering happens.
The UI came together in a single HTML file, no framework, no build step, just vanilla JavaScript served by a lightweight Python HTTP server. That decision paid off immediately when I wanted to use it from my iPad: install the PWA, point it at my Mac’s local IP, done. The processing stays on the Mac; the tablet is just a display.
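A minimal sketch of that pattern (illustrative only, not Mind Palace’s actual server code) uses Python’s stdlib http.server to expose a static directory to the LAN:

```python
import functools
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def make_server(directory: str, port: int = 8080) -> ThreadingHTTPServer:
    """Serve a static directory (one index.html plus assets) to the local network."""
    handler = functools.partial(SimpleHTTPRequestHandler, directory=directory)
    # Bind to 0.0.0.0 so other devices on the LAN (iPad, phone) can connect
    return ThreadingHTTPServer(("0.0.0.0", port), handler)

# make_server("site").serve_forever()  # blocks; Ctrl-C to stop
```

Any device that can reach the Mac’s IP gets the same interface, which is all a PWA needs.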
Coming Soon
Mind Palace is not released yet, but it’s close. The Python reference implementation is working well in daily use, and I’ll be pushing it to GitHub soon. It came together pretty quickly so I want to do a little more stress testing on it before that happens. The longer-term goal is a proper native Mac app, a menu bar utility with an embedded server, and an iOS companion that discovers it automatically on your local network. That’s a future chapter, or even a novella.
For now, if you want to know when it drops, the best place to watch is my GitHub profile at github.com/dwmetz. I’ll also post here and on Bluesky when it’s live.
If you’ve got a Notes library, a Reading List, and a bunch of folders that hold more institutional knowledge than you can reliably remember, this was built for exactly that situation. More to come.
MalChela v4.1 is out today, and the headline is something I’ve been wanting to tackle for a while: dedicated Mac malware analysis tooling. If you’ve been following the channel or the blog, you know MalChela started as a triage-first toolkit aimed at the kinds of samples that show up in Windows-centric IR engagements. That coverage was never the full picture. Mac malware — infostealers, adware loaders, APT implants — has become too common to treat as an edge case. v4.1 is the start of addressing that directly.
New Tools: Mac Analysis
Three new tools land in this release, each targeting a different layer of Mac binary analysis. All three are available in the PWA under the Mac Analysis heading, accessible via CLI shortcodes, and included in the release scripts.
codesign_check (cs)
macOS code signatures are one of the first things worth checking on any suspicious binary. codesign_check accepts either an .app bundle or a bare Mach-O and reports signature status (Developer-signed, Ad-hoc, or Unsigned), Bundle ID, Team ID, and entitlement presence — including the get-task-allow flag that marks debug and development builds. It also verifies the _CodeSignature/ and CodeResources directory structure.
Indicators flagged: missing CMS blob, CS_ADHOC flag, absent Team ID, and get-task-allow entitlement. FileMiner now suggests Code Sign Check automatically for all Mach-O files in a scan. (Planned feature: adding a certificate revocation check).
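This kind of check typically leans on Apple’s codesign tool. As a rough sketch (not codesign_check’s actual implementation — the parsing heuristics here are my own assumptions), classifying `codesign -dv` output might look like:

```python
import subprocess

def classify_signature(output: str) -> str:
    """Heuristic classification of `codesign -dv` output."""
    if "code object is not signed at all" in output:
        return "Unsigned"
    if "Signature=adhoc" in output:
        return "Ad-hoc"
    if "TeamIdentifier=" in output and "TeamIdentifier=not set" not in output:
        return "Developer-signed"
    return "Unknown"

def codesign_check(path: str) -> str:
    # codesign writes its -dv report to stderr, not stdout
    proc = subprocess.run(["codesign", "-dv", "--verbose=4", path],
                          capture_output=True, text=True)
    return classify_signature(proc.stderr)
```

Entitlement checks (including get-task-allow) require a separate entitlement dump, which the sketch omits.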
plist_analyzer (pa)
Parses macOS .plist files and .app bundle Info.plist for static malware indicators. This release includes four new detections:
LSUIElement / NSUIElement = true — app runs as a hidden background agent with no Dock icon. Both the modern LSUIElement and legacy NSUIElement (integer 1) forms are now detected, covering older macOS malware that used the pre-Sierra key.
NSAllowsArbitraryLoads = true — App Transport Security disabled, a classic C2 channel indicator.
CFBundleURLTypes with custom URL schemes — flags non-standard scheme registrations used for persistence or inter-process communication.
CFBundleSignature = ‘????’ — no creator code set, common in unsigned tools and malware.
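In Python, those four checks can be sketched with plistlib (a simplified approximation, not plist_analyzer’s actual source — key handling in real bundles is messier):

```python
import plistlib

def load_info_plist(path: str) -> dict:
    with open(path, "rb") as f:
        return plistlib.load(f)

def plist_indicators(info: dict) -> list:
    """Flag the four static indicators described above (heuristic sketch)."""
    hits = []
    # Hidden background agent: modern bool key or legacy integer key
    if info.get("LSUIElement") is True or info.get("NSUIElement") == 1:
        hits.append("hidden background agent (LSUIElement/NSUIElement)")
    # ATS disabled: nested under NSAppTransportSecurity in real Info.plist files
    if info.get("NSAppTransportSecurity", {}).get("NSAllowsArbitraryLoads") is True:
        hits.append("App Transport Security disabled")
    for url_type in info.get("CFBundleURLTypes", []):
        if url_type.get("CFBundleURLSchemes"):
            hits.append("custom URL scheme registration")
    if info.get("CFBundleSignature") == "????":
        hits.append("no creator code set")
    return hits
```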
macho_info (mo)
Parses thin and fat/universal Mach-O binaries and reports: architecture, linked libraries, section entropy, symbol status, RPATH entries, __PAGEZERO integrity, and PIE/ASLR flags.
This release also adds deprecated crypto library detection: macho_info now flags linkage against end-of-life OpenSSL libraries (libcrypto.0.9.8, libssl.0.9.8, and variants). There’s no legitimate reason for a modern binary to link these — flag it and investigate further.
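An equivalent check can be approximated outside MalChela by parsing otool -L output (a sketch, assuming the EOL library names above are the ones worth flagging):

```python
import re
import subprocess

# End-of-life OpenSSL library names worth flagging
DEPRECATED_CRYPTO = re.compile(r"lib(?:crypto|ssl)\.0\.9\.8")

def deprecated_crypto_links(otool_output: str) -> list:
    """Return linked-library lines that reference EOL OpenSSL."""
    return [line.strip() for line in otool_output.splitlines()
            if DEPRECATED_CRYPTO.search(line)]

def scan_binary(path: str) -> list:
    out = subprocess.run(["otool", "-L", path], capture_output=True, text=True)
    return deprecated_crypto_links(out.stdout)
```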
mStrings — Mac Tuning
Running mStrings against Mach-O binaries previously produced a lot of noise: ObjC runtime stubs, Swift mangled symbols, and Apple system library paths that add volume without adding signal. A new is_objc_swift_noise() filter suppresses these categories:
_objc_* runtime stubs
@_* import stubs (including @_LSSharedFileList*, which was previously surfacing as false-positive filesystem IOCs)
Swift mangled symbols (_$s*, _T0, swift_*)
Apple system dylib paths under /System/Library/Frameworks/ and /usr/lib/swift/
ObjC type encoding strings
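In Python terms (MalChela itself is Rust, so this is an approximation of the categories above, not the shipped filter), the suppression logic might look like:

```python
import re

NOISE_PREFIXES = ("_objc_", "@_", "_$s", "_T0", "swift_")
NOISE_PATHS = ("/System/Library/Frameworks/", "/usr/lib/swift/")
# Very rough stand-in for ObjC type-encoding strings like "v16@0:8"
RE_OBJC_TYPE_ENC = re.compile(r"[vcislqfdB@:#\*\[\]\{\}0-9]+")

def is_objc_swift_noise(s: str) -> bool:
    """True if a string falls in one of the noise categories listed above."""
    if s.startswith(NOISE_PREFIXES):
        return True
    if any(p in s for p in NOISE_PATHS):
        return True
    return "@" in s and ":" in s and RE_OBJC_TYPE_ENC.fullmatch(s) is not None
```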
Alongside the noise filter, 12 new Mac-specific MITRE detection rules have been added to detections.yaml:
Rule — Technique
MacLaunchAgentDaemonPersistence — T1543.001
MacLoginItemPersistence — T1547.015
MacShellProfileInjection — T1546.004
MacCronJobPersistence — T1053.003
MacDylibInjection — T1574.006
MacKeychainAccess — T1555.001
MacAppleScriptExecution — T1059.002
MacUnixShellExecution — T1059.004
MacPrivilegeEscalation — T1548.004
MacSystemDiscovery — T1082
MacSandboxVMEvasion — T1497.001
MacSensitiveFileAccess — T1005
Mac path extraction also gets a dedicated regex: re_mac_path captures filesystem IOCs in Mac-style paths (.sh, .py, .dylib, .plist, .app, .pkg, .command) under /Users/, /Library/, /tmp/, and related directories.
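A comparable pattern (an approximation for illustration, not the exact re_mac_path shipped in mStrings) might look like:

```python
import re

# Mac-style paths with IOC-relevant extensions under common root directories
re_mac_path = re.compile(
    r"/(?:Users|Library|tmp|private|var|Applications)"
    r"(?:/[\w.+-]+)*"
    r"\.(?:sh|py|dylib|plist|app|pkg|command)\b"
)

text = "persists via /Users/alice/Library/LaunchAgents/com.evil.plist and /tmp/drop.sh"
print(re_mac_path.findall(text))
# → ['/Users/alice/Library/LaunchAgents/com.evil.plist', '/tmp/drop.sh']
```

Note the character class excludes spaces, so paths containing spaces would need additional handling.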
FileMiner — Session Persistence
FileMiner scan results now persist across browser close and refresh. Results, the analyzed path, and the set of executed sub-tools survive in localStorage automatically. On each scan, a session.json is also written server-side to saved_output/fileminer/ — or to the active case folder under saved_output/cases/<case>/fileminer/ when Save to Case is checked.
A Load Session button in the FileMiner options bar opens a file browser pre-navigated to the correct session directory. Selecting a session.json restores the full results table and re-populates the path input. Like the previous GUI, FileMiner now tracks tool runs for suggested tools (green indicates a tool report has already been generated).
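Server-side, the routing of session.json is simple enough to sketch (a hypothetical helper illustrating the directory layout above, not MalChela’s actual Rust implementation):

```python
from pathlib import Path
from typing import Optional

def session_path(case: Optional[str] = None) -> Path:
    """Where a FileMiner session.json lands, per the layout described above."""
    base = Path("saved_output")
    if case:  # "Save to Case" checked: file goes under the active case folder
        return base / "cases" / case / "fileminer" / "session.json"
    return base / "fileminer" / "session.json"
```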
MalChela v4.1 is available now on GitHub. As I said, this is just the start of the macOS malware support. I’m looking forward to taking this much further.
As one tends to do on Saturday mornings with coffee in hand, I was reviewing two samples attributed to the LunaStealer / LunaGrabber family. Originally I was validating that tiquery was working with the MCP configuration; what started as a quick TI check, however, turned into a full static analysis session — and it gave me a good opportunity to put the MalChela MCP integration through its paces in a real workflow. This post walks through how that investigation unfolded, what the pivot points were, and what we found at the bottom of the rabbit hole.
The Setup
If you haven’t seen the MalChela MCP plugin before, the short version is this: MalChela is a Rust-based malware analysis toolkit I’ve been building for a while — tools like tiquery, fileanalyzer, mstrings, and others. The MCP server exposes all of those tools to Claude Desktop natively, so instead of dropping to the terminal for every command, I can run analysis steps conversationally and let Claude help interpret the results and suggest next moves.
This is not replacing the terminal — it’s augmenting it. The pivot decisions still come from the analyst. But having a reasoning layer that can look at mstrings output and say “that SetDllDirectoryW + GetTempPathW combination is staging behavior, and here’s the ATT&CK mapping” is genuinely useful when you’re moving fast.
Both samples were sitting in a folder on my Desktop. I had SHA-256 hashes. Let’s go.
Phase 1: Threat Intelligence Query
First move is always TI. The MalChela tiquery tool hits MalwareBazaar, VirusTotal, Hybrid Analysis, MetaDefender, and Triage simultaneously and returns a combined results matrix. Two calls, two answers.
Sample 1 (4f3b8971...) came back confirmed LunaStealer across all five sources. First seen 2025-12-01. Original filename sdas.exe. VT tagged it trojan.generickdq/python — already telling us something about the build.
Sample 2 (d4f57b42...) was more interesting. MalwareBazaar returned both LunaGrabber and LunaStealer tags. Triage clustered it with BlankGrabber, GlassWorm, IcedID, and Luca-Stealer. The original filename was loader.exe. That’s a different kind of name than sdas.exe. One sounds like a throwaway test artifact. The other sounds deliberate.
The TI results alone suggested these weren’t just two copies of the same thing. They were potentially different components of the same campaign.
Phase 2: Static PE Analysis
fileanalyzer and mstrings on both samples.
The first thing that jumped out was the imphash — f3c0dbc597607baa2ea891bc3a114b19 — identical on both. Same section layout, same section sizes, same import count (146), same 7 PE sections including the .fptable section that PyInstaller uses for its frozen module table. These two samples were compiled from the same PyInstaller loader template with different payloads bundled inside.
But the entropy diverged sharply. Sample 1 (sdas.exe) came in at 3.9 — low, even for a PyInstaller bundle. Sample 2 (loader.exe) was 6.9 — high, indicating the embedded payload is compressed or encrypted more aggressively. Combined with the file size difference (47 MB vs 22 MB), this was the first signal that what was inside each bundle was meaningfully different.
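For context, the entropy figures here are byte-level Shannon entropy on a 0-to-8 scale, which can be computed like this (a generic sketch, not fileanalyzer’s internals):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Byte-level Shannon entropy: 0.0 (constant) up to 8.0 (uniformly random)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    # sum of p * log2(1/p) over observed byte values
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# Compressed or encrypted payloads push entropy toward 8.0
print(shannon_entropy(b"\x00" * 1024))         # → 0.0
print(shannon_entropy(bytes(range(256)) * 4))  # → 8.0
```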
mstrings gave us 22–23 ATT&CK-mapped detections across both samples — largely the same set: IsDebuggerPresent, QueryPerformanceCounter, SetDllDirectoryW, GetTempPathW, ExpandEnvironmentStringsW, OpenProcessToken. Standard infostealer staging behavior. Tcl_CreateThread showed up in both, which is a PyInstaller artifact from bundling Python with Tkinter. The VT python family tag made more sense in context.
Phase 3: PyInstaller Extraction
Both samples were extracted with pyinstxtractor-ng. This is where the two samples started to diverge clearly.
Sample 1 entry point: sdas.pyc — Python 3.13, 112 files in the CArchive, 752 modules in the PYZ archive.
Sample 2 entry point: cleaner.pyc. The name cleaner.pyc inside a file called loader.exe is a tell. That’s not a stealer payload name. That’s something that runs after.
The bundled library sets were nearly identical between both — requests, requests_toolbelt, Cryptodome, cryptography, psutil, PIL, sqlite3, win32 — same stealer framework. But Sample 2 had a unique addition: a l.js reference (mapped to T1059 — Command and Scripting Interpreter). A JavaScript component not present in the December build. The OpenSSL versions also differed: Sample 1 bundled libcrypto-3.dll (OpenSSL 3.x), Sample 2 had libcrypto-1_1.dll (OpenSSL 1.1). Different build environments, roughly one month apart.
At this point the working theory was solid: Sample 1 is a standalone stealer. Sample 2 is a later-generation dropper/installer with an updated payload and additional capability.
Phase 4: Bytecode Decompilation
decompyle3 couldn’t handle Python 3.11 or 3.13 bytecode. That’s a known limitation. pycdc (Decompyle++) handles both.
sdas.pyc decompiled cleanly — the import stack made the capability set immediately obvious:
from win32crypt import CryptUnprotectData
from Cryptodome.Cipher import AES
from PIL import Image, ImageGrab
from requests_toolbelt.multipart.encoder import MultipartEncoder
import sqlite3
CryptUnprotectData for browser master key decryption. AES for the decryption itself. ImageGrab for screenshots. MultipartEncoder for structured exfiltration. Classic infostealer, nothing surprising.
cleaner.pyc was a different story. The decompiler output opened with heavy obfuscation: byte arrays used to reconstruct eval, getattr, and __import__ at runtime, so none of those strings appear in plain text. The approach is designed to evade static string detection.
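For illustration only (this is a benign reconstruction of the technique, not code from the sample), the pattern looks like:

```python
import builtins

# "eval" carried as a byte array — the literal string never appears in the file
name = bytes([101, 118, 97, 108]).decode()  # reconstructs "eval"
fn = getattr(builtins, name)                # resolved only at runtime
result = fn("1 + 1")                        # dynamic dispatch to the builtin
# Real samples apply the same trick to getattr and __import__ themselves,
# bootstrapping the whole loader from opaque byte arrays.
```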
Standard Python malware obfuscation. But buried further down in the decompile output was a large binary blob — a bytes literal starting with \xfd7zXZ. That’s the LZMA magic header.
Phase 5: LZMA Stage 2 Extraction
The blob was located at offset 0x17d4 in the pyc file. Extract and decompress it:
import lzma
blob = open('cleaner.pyc', 'rb').read()
idx = blob.find(b'\xfd7zXZ')  # offset of the XZ magic header
decompressed = lzma.decompress(blob[idx:])
# → 102,923 bytes
One important detail: the decompression is wrapped in a try/except LZMAError block with os._exit(0) on failure. If the decompression fails — as it would in some emulated sandbox environments — the process exits silently with no error. That’s the anti-sandbox mechanism.
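The shape of that guard can be reconstructed like this (illustrative, not the sample’s literal code):

```python
import lzma
import os

def unpack_stage2(blob: bytes) -> bytes:
    """Decompress the embedded XZ blob, or vanish silently on failure."""
    idx = blob.find(b"\xfd7zXZ")  # XZ magic; a missing header also fails below
    try:
        return lzma.decompress(blob[idx:])
    except lzma.LZMAError:
        # Emulators that mangle the blob land here: exit silently,
        # leaving no traceback for the sandbox to log
        os._exit(0)
```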
The decompressed payload was another obfuscated Python source using a custom alphabet substitution encoding. The final execution chain was compile() + exec(). Decoding the full stage 2 revealed everything.
The decoded payload is the live Discord injector: stage 2 pulls a JavaScript file from GitHub and injects it into the Discord desktop client’s core module, persisting across restarts.
The capability set from stage 2:
Anti-analysis checks on startup: process blacklist (~30 entries including wireshark, processhacker, vboxservice, ollydbg, x96dbg, pestudio), MAC address blacklist (80+ VM prefixes), HWID blacklist, IP blacklist, username/PC name blacklists
Discord token theft from all three release channels (stable, canary, PTB)
Browser credential theft across 20+ Chromium and non-Chromium browsers
Roblox session cookie harvesting (.ROBLOSECURITY= targeting with API validation)
Self-deletion uses a simple ping-delay trick: the roughly three-second wait lets the process fully exit before the delete fires, so the file removes itself cleanly after execution.
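That self-delete pattern is a long-standing Windows trick; a generic reconstruction (not the sample’s literal command line) looks like:

```python
def self_delete_command(path: str) -> str:
    """Generic cmd.exe self-delete one-liner (reconstruction for illustration).
    ping 127.0.0.1 -n 3 takes a few seconds, giving the parent process time
    to exit before del fires on the now-unlocked file."""
    return f'ping 127.0.0.1 -n 3 > nul & del /f /q "{path}"'

# A dropper hands this to a detached shell and exits immediately, e.g.:
# subprocess.Popen(["cmd", "/c", self_delete_command(sys.argv[0])])
```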
What MalChela + MCP Added to This Workflow
The honest answer is: speed and synthesis.
tiquery hitting five TI sources in one call versus five separate browser tabs or CLI invocations is a meaningful time saving, but that’s the surface benefit. The deeper value showed up in the mstrings step — getting ATT&CK-mapped output with technique IDs alongside the raw strings meant the behavioral picture came together faster than manually correlating imports against the ATT&CK matrix.
The MCP integration meant each of those steps — TI query, PE analysis, string extraction — could happen within the same conversation context. Claude could see the fileanalyzer output and the mstrings output together and note that the entropy difference between the two samples was significant, that the identical imphash meant shared loader infrastructure, that the staging imports in mstrings were consistent with the exfil approach suggested by the TI tags. That cross-tool synthesis is where the integration earns its keep.
The parts that still required manual work: pyinstxtractor-ng, pycdc, the LZMA extraction, and decoding the stage 2. Those are terminal steps on the Mac.
If you’re running MalChela in your environment and want to reproduce the TI query steps, the MalChela MCP plugin source is on GitHub at github.com/dwmetz/MalChela. Questions or additions to the IOC list — find me on the usual channels.
When I started building MalChela, I had a narrow problem to solve. I was doing a lot of malware triage during incident response engagements and I kept reaching for the same scattered set of tools — VirusTotal, some strings extraction, a hash lookup here, a YARA scan there. The workflow existed, but it wasn’t a workflow. It was a series of scripts and context switches dressed up as a process. I wanted something that unified those steps under one roof, ran locally, and felt like a tool a forensicator actually built.
What I got was MalChela. What I didn’t expect was how far it would go.
From Rust Experiment to Field Platform
The first version was modest. A handful of tools with a unifying CLI runner. The goal was simple: hash a malware sample, look it up, pull strings, run YARA. The kind of triage you want to do in the first ten minutes with an unknown file.
Version 2 brought a desktop GUI — MalChelaGUI, built on egui/eframe. It was a genuine step up in accessibility. Analysts who weren’t comfortable in the terminal had a way in. The toolset kept growing.
Version 3 added structure around the investigation itself. Case management landed, giving results somewhere to live across a session. MCP server integration followed, opening up a whole new mode of operation — Claude working alongside the tools, not just alongside me.
But the GUI carried freight. It meant building for a specific platform, managing a Rust GUI dependency chain, and ultimately shipping something that couldn’t easily follow MalChela into its most interesting new use case: the field.
Toby Changed Everything
If you’ve been following Baker Street Forensics for the last few months, you’ve seen the ‘TOBYgotchi’ project take shape — a Raspberry Pi Zero 2W running Kali Linux, with a Waveshare e-ink display, PiSugar battery, and MalChela pre-installed. Boot it up, it announces itself on the network, and you’re ready to triage. And yes, I am working on making a full build of TOBY available to the public. Stay tuned…
The original field kit vision was: SSH in, run tools from the CLI, pull results. Simple and functional. But the more I used Toby in practice, the more I wanted a better interface — something that worked without a terminal, something a colleague could pick up at a scene without knowing the command syntax.
MalChelaGUI on a Pi Zero 2W is possible but not comfortable. The egui overhead, the X display stack, remote display via VNC — it all works, but it’s friction. What I wanted was something lighter. Something any browser on the network could reach. Something that felt native on an iPad.
That’s what pulled me toward the PWA.
v4.0: The PWA Takes Over
MalChela v4.0 retires the desktop GUI entirely and replaces it with a Progressive Web App as the primary interface.
Every tool that lived in MalChelaGUI has been ported. Most have been improved in the process. The PWA is served locally from the server/ directory — run setup-server.sh once after building the binaries, then start-server.sh on every subsequent boot. Open any browser on the local network and you’re in.
On Toby, this is now part of autostart. Boot the Pi — battery-powered, no cables required — and the server comes up automatically. Connect from your desktop, phone or iPad directly to the PWA. No VNC, no X display overhead, no SSH tunnel. Just a browser pointing at the Pi’s IP.
And here’s the part that makes it genuinely useful in the field: you can upload files directly from whatever device you’re browsing from to the MalChela server. Phone, iPad, laptop — if it has a browser and can reach Toby on the network, it can submit a sample for analysis. The triage station travels with you, and so does the interface.
This is still a work in progress, but the direction is clear: a battery-powered Pi you can drop on a table at a scene, pull out your tablet, and start triaging — no keyboard, no monitor, no additional hardware required.
The field kit I was imagining finally snapped into focus.
REMnux Support
Running MalChela on a REMnux instance? It’s now even easier to load the REMnux tools.yaml configuration:
Configuration > tools.yaml > Load REMnux
then refresh the browser and you’ve got access to all the REMnux CLI tools from within MalChela.
What Else Is New
Simplified case management. This one’s been on my list for a while. In previous versions, case management was tied to starting with a file or folder — you had to know what you were investigating before you could create a case. That’s not how IR actually works. v4.0 breaks that dependency: any result can be saved to a case, and you can create a new case from within a running tool session. All the output, whether from the included cargo tools or third-party add-ons like TShark or Volatility, can be saved to your case. The investigation defines the case, not the other way around.
Improved Volatility support. The Volatility integration got a meaningful UX overhaul. The reference panel has been improved, and output now streams inline within the PWA — no more spawning a separate terminal window to see results, which was one of the more awkward edges of the old GUI experience.
Rapid tool iteration via tools.yaml. The PWA is built around a tools.yaml configuration file that defines the tool manifest. Add a new tool, update the YAML, refresh the interface — done. No recompiling the GUI, no rebuilding the binary for a UI change. This makes extending MalChela considerably faster in practice, and opens the door for community-contributed tool configs down the road.
The CLI isn’t going anywhere. If you’re scripting triage workflows, running MalChela headless in an automated pipeline, or just prefer the terminal, everything you relied on in v3.x is still there. The PWA is the new face of MalChela; the CLI is still the engine.
Want to run MalChela on Windows? You can build it in an Ubuntu instance under WSL. Once you start the server in WSL, the Windows host can access the PWA via http://localhost:8675. (Modern WSL2 automatically forwards WSL loopback to Windows localhost.)
If you hit any issues, open an issue on GitHub. I tried to be as thorough as possible in my testing, but there’s only so much a one-man dev team can do. I’m happy to assist with troubleshooting and to improve the documentation. Rest assured you won’t get a “well, it works in my environment…”