Cloudflare Worker Conditional Reverse Proxy

< 1 min read A Cloudflare Worker that loads content from a subdomain or alternate location and rewrites references to that location in the response.

addEventListener('fetch', event => {
  var url = new URL(event.request.url);
  if (url.pathname.startsWith('/blog')) {
    event.respondWith(handleBlog(event, url));
  } else {
    // Everything outside /blog passes through to the origin untouched
    event.respondWith(fetch(event.request));
  }
});

async function handleBlog(event, url) {
  // Load subdomain content / reverse proxy to subdomain.
  // NOTE: the original domains were elided; "www.example.com" and
  // "blog.example.com" below are hypothetical placeholders.
  var originUrl = url.toString().replace('www.example.com/blog', 'blog.example.com');
  // Load content
  let response = await fetch(originUrl);

  // Make sure we only modify text, not images
  let type = response.headers.get("Content-Type") || "";
  if (!type.startsWith("text/")) {
    return response;
  }

  // Read response body
  let text = await response.text();

  // Rewrite subdomain references back to the public URL
  let modified = text.replace(/blog\.example\.com/g, "www.example.com/blog");

  // Return modified response
  return new Response(modified, {
    status: response.status,
    statusText: response.statusText,
    headers: response.headers
  });
}

Authentication Ideas

6 min read Security, particularly authentication, authorization, and auditing, is my favorite part of software development. It isn't just what keeps us safe; the reason I like it so much is that it's by far the broadest part of software development. It requires us to understand the full breadth of the field, from hardware security components like TPM (Trusted Platform Module) chips to IETF standards-based protocols that not only make things safer but open the door to creating simpler, better, and more integrated systems. Historically this wasn't always the case, and security was at odds with other concerns like performance and usability. Those problems have largely been addressed now that we think of system behavior as emerging from the interaction of many systems and focus on the end problem we're trying to solve, instead of trying to fit the problem into an isolated individual system.

This new way of thinking gave rise to fields such as Systems Engineering, where the focus shifts to discovering the real problems that need to be solved and identifying the most probable and highest-impact failures that can occur. In the security domain, organizations like (ISC)², OWASP, and NIST have recognized and promoted this understanding very well over the years, and standards have changed for the better.

One concrete example of this, I think, is NIST's update to SP 800-171 that removed periodic password change requirements and dropped password complexity rules in favor of screening new passwords against a list of commonly used or compromised passwords.
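That screening approach is easy to sketch. A minimal illustration in JavaScript, assuming the blocklist is just a small in-memory set (in practice it would come from a large corpus of breached passwords, not a hardcoded list):

```javascript
// NIST-style screening: no composition rules, just a minimum length and a
// check against known common/compromised passwords. The set below is a
// tiny stand-in for a real breach corpus.
const compromised = new Set(['password', '123456', 'qwerty', 'letmein']);

function isAcceptable(candidate, minLength = 8) {
  if (candidate.length < minLength) return false;        // too short
  return !compromised.has(candidate.toLowerCase());      // on the blocklist?
}

console.log(isAcceptable('letmein'));                         // false
console.log(isAcceptable('short'));                           // false
console.log(isAcceptable('correct horse battery staple'));    // true
```

A long passphrase passes without any digit/symbol requirements, which is exactly the trade the updated guidance makes.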

Parallel Foreach async in C#

5 min read Foreach itself is very useful and efficient for most operations. But special situations arise where there is high latency in getting the data to iterate over, or where processing each item depends on an operation with very high latency or long processing time. This is the case, for example, when getting paged data from a database to iterate over. The goal is to start pulling data from the database a chunk at a time, since getting one record at a time introduces its own overhead. As the data becomes available, we start processing it, while in the background we fetch more data and feed it into the processor. The processing itself is parallel as well, picking up the next item as soon as a slot frees up.


My favorite way to do this is with an extension method Stephen Toub wrote many years ago. It accepts a data source, breaks it into partitions so you can specify the degree of parallelism, and takes a lambda to execute for each item:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace Extensions
{
    public static class Extensions
    {
        public static Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body)
        {
            return Task.WhenAll(
                from partition in Partitioner.Create(source).GetPartitions(dop)
                select Task.Run(async delegate {
                    using (partition)
                        while (partition.MoveNext())
                            await body(partition.Current);
                }));
        }
    }
}
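The same idea translates to other languages. As an illustrative analogue (not from the original post), here it is sketched in JavaScript: a fixed number of workers share a single iterator, so each worker pulls the next item as soon as it finishes its current one, just like the partitions above.

```javascript
// Bounded-concurrency forEach: `dop` workers cooperatively drain one
// shared iterator. iterator.next() is synchronous, so in JavaScript's
// single-threaded model no two workers can claim the same item.
async function forEachAsync(source, dop, body) {
  const iterator = source[Symbol.iterator]();
  const worker = async () => {
    for (;;) {
      const next = iterator.next();   // claim the next item
      if (next.done) return;          // this worker is finished
      await body(next.value);         // process before claiming another
    }
  };
  await Promise.all(Array.from({ length: dop }, worker));
}
```

Usage mirrors the C# version: `await forEachAsync(ids, 8, async id => { ... })` runs at most 8 bodies at a time.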

But let’s see what we can do to optimize more…

Observability using Elasticsearch and Neo4j

10 min read Elasticsearch continues to add features at an astonishing rate, and people find really creative ways to use them and enhance it even more. What Neo4j can do is just way too cool to pass on. So we'll look at how to ingest data with Elasticsearch and analyze it with Neo4j. Combining the two helps us achieve some really powerful solutions.

I was originally intrigued by Elasticsearch for log aggregation and its capability to instantly aggregate and search over millions of records. We can ship logs from all sorts of data sources, like application logs and web server logs (Nginx, IIS), then filter through them in Kibana's Discover, choose the columns we want to see for particular use cases, and create saved searches. This immediately made it useful to us, the engineering team. We then use query-based filtering to restrict which documents people can access, and with field-level security we can control which fields they even see inside each document. All of a sudden we have the ability to give our level 1 support real-time visibility into customer issues, without overloading them. On top of this, we add Windows event logs and syslogs and create some alerts.
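To make that concrete, here is a sketch of such a role using Elasticsearch's security API: read access to the log indices, a query restricting which documents match, and field-level security limiting the visible fields. The role name, index pattern, fields, and query are all hypothetical placeholders.

```json
PUT _security/role/support_l1
{
  "indices": [
    {
      "names": ["logs-*"],
      "privileges": ["read"],
      "query": { "term": { "customer.tier": "standard" } },
      "field_security": {
        "grant": ["@timestamp", "level", "message", "customer.id"]
      }
    }
  ]
}
```

Anyone assigned this role sees only matching documents, and only the granted fields, in Discover and saved searches.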

Change Desktop Wallpaper with a USB Rubber Ducky

5 min read Rubber Ducky is the “technical” name of a USB device which looks like a USB thumb drive, but presents itself to the computer as a USB keyboard, which, once plugged in, starts typing away pre-recorded keystrokes at super-human speeds. It can bypass many IT safeguards since the computer detects it as a keyboard, and it can easily fool people. Someone might think they’re just plugging in a harmless USB thumb drive, or even just a harmless iPhone charging cable. These can be disguised as just about anything these days, and they can be microscopic.

Monitoring DNS Lookups against your DNS Server using Elasticsearch

7 min read DNS is one of the most fundamental components of today's internet and one of the simplest technologies, but it's also one of the biggest risks, and it can be one of the best indicators when something is wrong. Many cyber attacks either have their root in DNS or simply use DNS as part of the structure of the attack without a second thought. Understanding your organization's DNS traffic is a very solid step in protecting the overall organization, and monitoring DNS lookups with Elasticsearch and analyzing them with machine learning can significantly reduce the risk of several types of attacks. Here are some of the DNS-based threats and how to spot them.

Keeping Most of the Software on Your Computer Up-To-Date Automatically

5 min read This idea came to me when I was configuring Puppet Server to manage our Windows VMs. One thing that had annoyed me for the longest time was having Notepad++ installed everywhere and getting that popup that an update is available. Disabling the update check is not the solution. I realized that our server deployments are in good shape: we build containers, automate installation of programs with package managers like Chocolatey on Windows and Yum on CentOS, and fully orchestrate servers and services. We can do something similar to keep our own personal computers' software up-to-date using Chocolatey, without all the orchestration tech we usually need: basically just a scheduled task that runs a Chocolatey script to keep our stuff up-to-date.
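On Windows the whole thing can be a single scheduled task. A sketch, assuming Chocolatey is already installed; the task name and the 3:00 AM schedule are arbitrary placeholders:

```shell
:: Create a daily task that upgrades everything Chocolatey manages.
:: "ChocoUpgradeAll" and the start time are placeholders; adjust to taste.
schtasks /Create /TN "ChocoUpgradeAll" /SC DAILY /ST 03:00 /RU SYSTEM /TR "choco upgrade all -y"
```

Running as SYSTEM lets upgrades proceed without a logged-in user, and `-y` answers the confirmation prompts so the task never stalls.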

Managing Multi-Cloud Hybrid Environments with DevOps Tools

11 min read I get asked a lot which cloud provider I prefer, even by people who know me well, and the answer I give lately really surprises them, I think. My answer is: a combination of all of them, plus colocated environments. When it comes to the major players in the cloud world, namely GCP, Azure, and AWS, most of the offerings are pretty much on par with each other. The preference people have really comes from trust in the company's management of the environment, price, friendliness and familiarity of the interface, and clear visibility into what's going on. Well sure, but that's fine as long as you're only using one; in many enterprises there are really good reasons to use a combination of cloud providers, combined with on-premises and colocated hardware. These reasons include the risk to availability in extreme cases of global outages from a single provider (and there are examples of this in the past), as well as specific niche offerings that are only available from one provider and only used by a specific department. Let's take a look at the tools some of the top technology companies use to manage their hybrid multi-cloud environments.

The Day Management Left

11 min read The panic and adventure start on a day like any other: I go to work, we have our daily standup, I write some code, and even answer a few emails. The next part, however, really surprised me. I get a heads up from a friend in one of our overseas offices alerting me that something really big just happened and that I should start poking around. As suggested, I check with some of my colleagues in other offices, and sure enough, we soon find out: our CTO just sent in his…

Building a fast and secure blog – Part 4

8 min read The most important thing in security and performance, more than anything else I'd say, is: measure, measure, measure; and once you have all the info, set up automatic measurement and alerts. We've already set up scanning for some basic things like malware, but there's a lot more to scan for.

SSL / Encryption settings / strength

SSL Server Test from Qualys will test the SSL/TLS configuration of your website, provide a lot of detail about your encryption capabilities and known vulnerabilities, and identify misconfigurations. With the settings configured so far, your grade should be A+, but that can change as new threats are discovered, so you should check regularly.
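Between scans, certificate expiry is one thing worth measuring automatically. A small shell sketch, with example.com standing in for your own host; a cron job could compare the date against a threshold and alert:

```shell
# Print the TLS certificate expiry date for a host so a scheduled job can
# alert before it lapses. example.com is a placeholder for your own domain.
host="example.com"
expiry=$(echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
  | openssl x509 -noout -enddate 2>/dev/null | cut -d= -f2)
echo "Certificate for $host expires: $expiry"
```

The `-servername` flag matters when the host serves multiple certificates via SNI; without it you may be checking the wrong one.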