The Long Game: MalChela v4.0

When I started building MalChela, I had a narrow problem to solve. I was doing a lot of malware triage during incident response engagements and I kept reaching for the same scattered set of tools — VirusTotal, some strings extraction, a hash lookup here, a YARA scan there. The workflow existed, but it wasn’t a workflow. It was a series of scripts and context switches dressed up as a process. I wanted something that unified those steps under one roof, ran locally, and felt like a tool a forensicator actually built.

What I got was MalChela. What I didn’t expect was how far it would go.

From Rust Experiment to Field Platform

The first version was modest. A handful of tools with a unifying CLI runner. The goal was simple: hash a malware sample, look it up, pull strings, run YARA. The kind of triage you want to do in the first ten minutes with an unknown file.

Version 2 brought a desktop GUI — MalChelaGUI, built on egui/eframe. It was a genuine step up in accessibility. Analysts who weren’t comfortable in the terminal had a way in. The toolset kept growing.

Version 3 added structure around the investigation itself. Case management landed, giving results somewhere to live across a session. MCP server integration followed, opening up a whole new mode of operation — Claude working alongside the tools, not just alongside me.

But the GUI carried freight. It meant building for a specific platform, managing a Rust GUI dependency chain, and ultimately shipping something that couldn’t easily follow MalChela into its most interesting new use case: the field.

Toby Changed Everything

If you’ve been following Baker Street Forensics for the last few months, you’ve seen the ‘TOBYgotchi’ project take shape — a Raspberry Pi Zero 2W running Kali Linux, with a Waveshare e-ink display, PiSugar battery, and MalChela pre-installed. Boot it up, it announces itself on the network, and you’re ready to triage. And yes, I am working on making a full build of TOBY available to the public. Stay tuned…

The original field kit vision was: SSH in, run tools from the CLI, pull results. Simple and functional. But the more I used Toby in practice, the more I wanted a better interface — something that worked without a terminal, something a colleague could pick up at a scene without knowing the command syntax.

MalChelaGUI on a Pi Zero 2W is possible but not comfortable. The egui overhead, the X display stack, remote display via VNC — it all works, but it’s friction. What I wanted was something lighter. Something any browser on the network could reach. Something that felt native on an iPad.

That’s what pulled me toward the PWA.

v4.0: The PWA Takes Over

MalChela v4.0 retires the desktop GUI entirely and replaces it with a Progressive Web App as the primary interface.

Every tool that lived in MalChelaGUI has been ported. Most have been improved in the process. The PWA is served locally from the server/ directory — run setup-server.sh once after building the binaries, then start-server.sh on every subsequent boot. Open any browser on the local network and you’re in.

On Toby, this is now part of autostart. Boot the Pi — battery-powered, no cables required — and the server comes up automatically. Connect from your desktop, phone or iPad directly to the PWA. No VNC, no X display overhead, no SSH tunnel. Just a browser pointing at the Pi’s IP.

And here’s the part that makes it genuinely useful in the field: you can upload files to the MalChela server directly from whatever device you’re browsing on. Phone, iPad, laptop — if it has a browser and can reach Toby on the network, it can submit a sample for analysis. The triage station travels with you, and so does the interface.

This is still a work in progress, but the direction is clear: a battery-powered Pi you can drop on a table at a scene, pull out your tablet, and start triaging — no keyboard, no monitor, no additional hardware required.

The field kit I was imagining finally snapped into focus.

REMnux Support

Running MalChela on a REMnux instance? Loading the REMnux tools.yaml configuration is now a single step:

Configuration > tools.yaml > Load REMnux

Then refresh the browser and you’ve got access to all the REMnux CLI tools from within MalChela.

What Else Is New

Simplified case management. This one’s been on my list for a while. In previous versions, case management was tied to starting with a file or folder — you had to know what you were investigating before you could create a case. That’s not how IR actually works. v4.0 breaks that dependency: any result can be saved to a case, and you can create a new case from within a running tool session. All the output, whether from the included cargo tools or from third-party add-ons like TShark or Volatility, can be saved to your case. The investigation defines the case, not the other way around.

Improved Volatility support. The Volatility integration got a meaningful UX overhaul. The reference panel has been improved, and output now streams inline within the PWA — no more spawning a separate terminal window to see results, which was one of the more awkward edges of the old GUI experience.

Rapid tool iteration via tools.yaml. The PWA is built around a tools.yaml configuration file that defines the tool manifest. Add a new tool, update the YAML, refresh the interface — done. No recompiling the GUI, no rebuilding the binary for a UI change. This makes extending MalChela considerably faster in practice, and opens the door for community-contributed tool configs down the road.
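Because the interface is driven entirely by a manifest file, adding a tool is a data change rather than a code change. As a rough sketch of the idea (the field names and flat "key: value" format below are hypothetical, not MalChela's actual tools.yaml schema), a minimal Rust loader for name/command entries could look like:

```rust
// Hypothetical sketch: the field names and flat "key: value" format are
// illustrative only; MalChela's real tools.yaml schema may differ.

#[derive(Debug, Default)]
struct ToolEntry {
    name: String,
    command: String,
}

/// Parse blank-line-separated "name:" / "command:" entries into tools.
fn parse_manifest(text: &str) -> Vec<ToolEntry> {
    let mut tools = Vec::new();
    let mut current = ToolEntry::default();
    // Chain one empty line so the final entry gets flushed too.
    for line in text.lines().chain(std::iter::once("")) {
        let line = line.trim();
        if line.is_empty() {
            if !current.name.is_empty() {
                tools.push(std::mem::take(&mut current));
            }
            continue;
        }
        if let Some((key, value)) = line.split_once(':') {
            match key.trim() {
                "name" => current.name = value.trim().to_string(),
                "command" => current.command = value.trim().to_string(),
                _ => {} // unknown keys are ignored in this sketch
            }
        }
    }
    tools
}
```

The point of this shape is that a new entry in the manifest becomes a new tile in the interface on the next refresh, with no rebuild of the server binary.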

Try MalChela for Yourself

MalChela v4.0 is available on GitHub now: https://github.com/dwmetz/MalChela/

The CLI isn’t going anywhere. If you’re scripting triage workflows, running MalChela headless in an automated pipeline, or just prefer the terminal, everything you relied on in v3.x is still there. The PWA is the new face of MalChela; the CLI is still the engine.

Want to run MalChela on Windows? You can build it in an Ubuntu instance in WSL. Once you start the server in WSL, the Windows host can access the PWA via http://localhost:8675. (Modern WSL2 automatically forwards WSL loopback traffic to Windows localhost.)

If you hit any snags, open an issue on GitHub. I tried to be as thorough as possible in my testing, but there’s only so much a one-man dev team can do. I’m happy to assist with troubleshooting and to improve the documentation. Rest assured you won’t get a “well, it works in my environment…”

From QR to Threat Identification in one Click

Recently I introduced Threat Intel Query (tiquery), a multi-source threat intelligence lookup tool. The first iteration expanded on the capability of malhash and enabled submission of malware hashes to multiple threat intel sites.

Then yesterday I was targeted with an SMS phishing message. (Note: I don’t know why, but I detest the term ‘smishing’, or any of the other ’ishings that have been used to describe these tactics.) The message was one of those outstanding-traffic-violations, ‘Final Court Notice’ scare tactics. Instead of a URL it had a QR code.

This inspired me to add some additional capability to tiquery. I’ve added URL support, which queries VirusTotal, urlscan.io, and Google Safe Browsing. As with all the other sources, API keys are required.

I also added a QR decoding capability: browse to a screenshot of a QR code and tiquery will decode it, then submit the decoded URL to the threat intel lookups.

This was a fairly new sample; the URL had been created just hours before.

Version 3.2.1 also adds the ability, when you’re in hash submission mode, to browse to a file. Only the hash, not the file, gets submitted – it just combines two steps into one.

Support for Recorded Future Tria.ge (researcher account) has also been validated. On that note, if you’re a member at Malpedia and would like to send me an invite, it would be much appreciated.

You can find the full documentation for tiquery including command line syntax in the User Guide within MalChela, or via the online docs here.

MalChela 3.2: More Cowbell? More Intel!

One of the things I value most about the open-source community is that the best improvements to a tool often don’t come from inside it — they come from outside conversations.  A short while back, the author of mlget, xorhex, reached out and suggested I add more malware retrieval sources to FOSSOR, one of my earlier tools for pulling down samples from threat intel repositories.  It was a generous nudge, and it landed at exactly the right moment.

FOSSOR started as a simple script.  It did one job — grab malware samples from a handful of sources — and it did it well enough.  When I wrote it, I already knew it was a candidate for eventual MalChela integration, but “eventually” had stayed firmly in the future tense.  The message from xorhex gave me the push to actually sit down and do it properly.

The result is tiquery — and it’s become a new centerpiece of MalChela 3.2.

The Pattern I Keep Repeating (Deliberately)

If you’ve followed this blog or the MalChela project for a while, you might notice a recurring arc in how my tools tend to develop.  It goes something like this:

  • Step one:  write a focused script that solves a specific problem.
  • Step two:  that script evolves into a standalone tool as the scope grows.
  • Step three:  the tool finds its permanent home inside MalChela, where it benefits from the broader ecosystem — case management, the GUI, the MCP server integration, the portable workspace.
  • Step four:  when there’s overlap between tools, follow the KISS principle.

FOSSOR was in step one.  The conversation with xorhex accelerated the jump to step three.  What emerged was something more ambitious than just a source expansion — it’s a unified threat intelligence query engine, built from the ground up.

If you’re new to MalChela, it’s a Rust-based malware analysis toolkit built for DFIR practitioners — static analysis, string extraction, YARA rule generation, threat intel lookups, network analysis, and now a unified case management layer tying it all together.  Free, open-source, and built to run anywhere.

Introducing tiquery

tiquery is now the single threat intel tool in MalChela, replacing the retired malhash.  The core idea is straightforward: submit a hash, query multiple sources in parallel, get a clean color-coded summary back.  No waiting for one source to finish before the next one starts.  No manually juggling browser tabs or API wrappers.

Out of the box, tiquery works with eight confirmed sources:

  • VirusTotal
  • MalwareBazaar
  • AlienVault OTX
  • MetaDefender Cloud
  • Hybrid Analysis
  • FileScan.IO
  • Malshare
  • ObjectiveSee (no API key required — queries a locally-cached macOS malware catalogue)

Sources are tiered — free sources and registration-required sources are distinguished in the interface.  If you haven’t configured an API key for a given source, tiquery skips it gracefully rather than throwing an error. This means you can run it easily with whatever keys you have available.
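A sketch of that fan-out-and-skip pattern (hypothetical types and function names, not tiquery's actual internals), using one thread per configured source so no lookup waits on another:

```rust
use std::thread;

// Hypothetical sketch of tiquery-style fan-out; real internals may differ.
struct Source {
    name: &'static str,
    needs_key: bool,
    api_key: Option<String>,
}

/// Stand-in for a real HTTP lookup against one source's API.
fn query_source(name: &str, hash: &str) -> String {
    format!("{name}: looked up {hash}")
}

/// Query every usable source on its own thread; skip (rather than fail on)
/// key-gated sources that have no key configured.
fn fan_out(sources: Vec<Source>, hash: &str) -> Vec<String> {
    let mut handles = Vec::new();
    for src in sources {
        if src.needs_key && src.api_key.is_none() {
            continue; // graceful skip: no key, no query, no error
        }
        let hash = hash.to_string();
        handles.push(thread::spawn(move || query_source(src.name, &hash)));
    }
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

The skip happens before a thread is ever spawned, which is why an unconfigured source costs nothing and produces no error noise in the summary.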

The ObjectiveSee integration deserves a special mention.  It queries the objective-see.org/malware.html catalogue for macOS-specific threats using a locally-cached copy that refreshes every 24 hours, with a stale-cache fallback for offline use.  For anyone doing Mac forensics, this is a meaningful addition — a free, no-key-required check specifically against known macOS malware families.
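The freshness policy described above can be sketched like this (assumed logic and names, not the actual implementation): use the cache while it is under 24 hours old, and fall back to the stale copy when a refresh isn't available:

```rust
use std::time::{Duration, SystemTime};

// Assumed logic for the cache policy described above; not the actual code.
const MAX_AGE: Duration = Duration::from_secs(24 * 60 * 60);

fn cache_is_fresh(cached_at: SystemTime, now: SystemTime) -> bool {
    match now.duration_since(cached_at) {
        Ok(age) => age <= MAX_AGE,
        Err(_) => true, // timestamp in the future: treat as fresh
    }
}

/// Prefer a fresh cache; otherwise prefer a successful refresh; otherwise
/// fall back to the stale cached copy for offline use.
fn pick_catalogue(
    cached: Option<&str>,
    cached_at: SystemTime,
    refreshed: Option<&str>,
    now: SystemTime,
) -> Option<String> {
    if let Some(c) = cached {
        if cache_is_fresh(cached_at, now) {
            return Some(c.to_string());
        }
    }
    refreshed
        .map(str::to_string)
        .or_else(|| cached.map(str::to_string))
}
```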

Tiquery, like FOSSOR, supports batch lookups as well — point it to a .csv or .txt file of hashes and they’ll all be checked in parallel. You can also download samples directly, with MalwareBazaar supported in this release and additional sources on the way (your vote matters).
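For batch mode, the input handling amounts to pulling plausible hashes out of each line. A sketch of that step (assumed behavior, not tiquery's actual parser: take the first comma-separated field per line and keep MD5/SHA-1/SHA-256-length hex):

```rust
// Assumed-behavior sketch: accept either a .txt file with one hash per line
// or a simple .csv where the hash is the first field, and normalize to
// lowercase hex of MD5 (32), SHA-1 (40), or SHA-256 (64) length.
fn extract_hashes(text: &str) -> Vec<String> {
    text.lines()
        .filter_map(|line| line.split(',').next())
        .map(|field| field.trim().to_ascii_lowercase())
        .filter(|h| {
            matches!(h.len(), 32 | 40 | 64)
                && h.chars().all(|c| c.is_ascii_hexdigit())
        })
        .collect()
}
```

Filtering by digest length and hex content means header rows, blank lines, and stray notes in the file fall out naturally before any lookup is queued.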

What It Looks Like in Practice

The screenshots below show tiquery running in both the CLI and GUI.  In both cases, for any of the matching sources you get a basic classification (malware family, tags, detections) and direct links to threat intelligence documents about the samples. It’s the perfect jumping-off point when you want to leverage community research.

The CLI output is clean and tabular — source abbreviation, status (color-coded FOUND/NOT FOUND), family and tag information, detection count, and a direct reference URL.  Everything you need to make a quick triage decision, no scrolling through API response JSON required. You can run tiquery CLI as a stand-alone, or from within the MalChela CLI menu.

In the GUI, the experience is layered a bit more richly.  You can toggle individual sources on or off, switch between single-hash and bulk lookup modes, download the sample directly from MalwareBazaar, and export results to CSV — all from one interface.  The macOS ObjectiveSee source displays its cache age inline so you always know how fresh the data is.

Both outputs feed into MalChela’s case management system.  Check “Save to Case” in the GUI, and tiquery creates a valid case.json automatically — no separate case creation step needed.

Extended Case Management Across the Toolkit

Speaking of case management — 3.2 extends “Save to Case” support across the full GUI.  File Analyzer, File Miner, and mStrings all now include the checkbox.  This closes out the last gaps in the case workflow.  Whatever tool you’re using for a given task, if you want to preserve the output in a named case, it’s one click. You no longer have to start with the New Case workflow, though that’s still recommended if you know the direction you’re heading from the start.

The Strings to YARA tool also gains a companion “Save to YARA Library” checkbox.  Check it, and the generated rule gets copied directly into the project’s yara_rules/ directory alongside being saved to the case. This automatically makes the rule available when you run fileanalyzer on subsequent files.  It’s a small workflow improvement, but one that eliminates a manual copy step I was taking every time anyway. I also added a quick formatter so the special character most often in need of escaping, “\”, gets handled automatically when the rule is generated.
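The escaping fix is small but worth spelling out: in a YARA text string, a literal backslash must be written as \\ (and a double quote as \"). A sketch of that formatter (hypothetical function name, not necessarily the one in the codebase):

```rust
/// Hypothetical helper: escape a raw extracted string for use inside a YARA
/// text string, where `\` must become `\\` and `"` must become `\"`.
fn escape_yara_string(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for c in s.chars() {
        match c {
            '\\' => out.push_str("\\\\"),
            '"' => out.push_str("\\\""),
            _ => out.push(c),
        }
    }
    out
}
```

Windows paths are the usual offender here; without this step, a string like C:\Windows\Temp produces a rule that fails to compile.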

A Note on malhash

malhash is retired in 3.2.  If you’ve been using it in scripts or workflows, tiquery is its direct replacement — it does everything malhash did and then some.  This is a breaking change in the sense that the binary is gone, but functionally tiquery is a superset, not a lateral move.

malhash served its purpose well.  RIP. tiquery is where that purpose lives now.

Get It

MalChela 3.2 is available now on GitHub.  The full release notes are in the repo. 

Thanks to xorhex for the nudge.  Sometimes the best features start with someone saying “have you thought about…”

A Study in DFIR: Open-Source, Enterprise, and the Art of Analysis

Someone asked me recently how I see DFIR evolving — tooling, automation, and open-source versus enterprise platforms. It’s the kind of question that sounds like a conference panel topic, but the answer is grounded in how work actually gets done. In practice, it isn’t a binary choice. The most effective IR practitioners I’ve worked with use a combination of both commercial and open-source tools, depending on the problem in front of them.

Commercial platforms handle workflow and scale. If you’re running incident response across a large enterprise and need to triage at volume, a solid commercial solution carries weight that a collection of scripts can’t. Aggregation, case management, reporting — those layers matter when you’re briefing a CISO at 2am. Open-source, on the other hand, reacts fast. When a new artifact surfaces — a novel malware family, a Windows update exposing a new forensic data source — the OSS community often has something usable before it shows up on a vendor roadmap.

Where this gets more nuanced is support. Some vendors have excellent support — responsive, technically sharp, and genuinely useful when you’re dealing with something unusual. Others offer little more than a ticketing system and a stale knowledge base. Open-source has the same variability: some projects have highly engaged maintainers who respond quickly to well-written issues, while others are effectively one-person efforts maintained when time allows. Neither model guarantees anything.

Cost follows a similar pattern. Open-source tools remove licensing fees, but they introduce operational overhead — staying current, understanding changes, and troubleshooting issues in your own environment. That cost is real, and it tends to stay invisible until something breaks at the wrong time.

Open-source tools also serve another purpose: they’re a sanity check. When something looks significant during analysis, validating it with an independent tool that parses the same artifact differently adds confidence. This isn’t about distrust — it’s about applying defense-in-depth to analysis itself. If two independently built tools reach the same conclusion, the finding is stronger. If they don’t, that discrepancy is worth investigating before it makes its way into a report.

That ties into a broader issue: treating tools as black boxes. A result comes out, it gets documented, and it ends up in the report with very little scrutiny of how it was produced. Knowing which tools to trust means understanding what they’re actually doing under the hood. The fix is simple but often ignored: read the release notes. Also, if a tool burned you two years ago, verify whether that’s still true. Vendors iterate. OSS projects iterate. Hanging onto an old assumption is an easy way to miss something useful. And “well-known” doesn’t mean “complete” — every tool has blind spots, and knowing where they are is part of the job.

All of this becomes more relevant when you look at how AI and automation are being positioned in DFIR. There are real capabilities being built, but there’s also a lot of noise. What’s consistently improving is automation around repeatable tasks — collection, parsing, triage. That matters. It allows a competent analyst to move faster and cover more ground. What hasn’t changed is the part that requires judgment: understanding context, recognizing when something doesn’t fit, and knowing what question to ask next. That intuition comes from experience, and there’s no real shortcut for it.

One shift that’s been more interesting is how many practitioners are now building their own tools. The barrier to entry has dropped. You don’t need to be a full-time software engineer to create something useful. If you understand the artifacts and can write a working parser in Python or Rust, you can build something that solves a real problem. That kind of domain-specific tooling — built by someone who understands what they’re looking for — is often more effective than a generic solution adapted to fit a forensic use case. It also reinforces the same principle: the more you understand the tooling, the less you rely on it blindly.

Use what works. Know its limits. Validate across tools when it matters. Don’t let a bad experience with an old version close a door. And write it down when something’s worth sharing.