Blog December 2019

Personal blog for Mr. Rob Muhlestein for the month of December, 2019.

Tuesday, December 31, 2019, 12:24:37PM

I hope my pull request adding -output and an updated Makefile gets merged into peg. Here is my fork.

PEG Makefile Suffixes

Tuesday, December 31, 2019, 12:18:36PM

I love that so many blogs about “why you should learn Go in 2020” are so popular today. Fact is, you should have started learning it in 2013 or earlier. That was the year when the creator of Express (the E in MEAN) switched to Go, or when the creator of Node himself started rewriting Node in Go (later Rust). Seriously, if you need to be told to learn Go today you are just not paying attention, at all. It’s a no-brainer. Python, Java, Ruby, and C++ are all legacy languages at this point replaceable entirely by Bash 4+, Go, and Rust. Those are just the facts. You can choose to accept them and move forward, or clutch the past to your bosom in denial.

Monday, December 30, 2019, 5:25:56PM

My tinout package is worth its weight in gold for the stuff I’m doing. It’s like they say, the best shit is always stuff people make because they are “scratching their own itch”. The 400+ test cases for dtime have already uncovered more than a dozen bugs I had. I certainly hope others will find it and benefit as I have.

Monday, December 30, 2019, 5:23:07PM

Back to maintaining a public TODO list. So much going on. Good way to lock things down until they are completed. No more half-finished shit, Rob. You got that! ;)

Monday, December 30, 2019, 3:36:50PM

I’m beginning to realize the gravity of my mistake of publishing my code under organizations and groups instead of my own personal account on GitHub. My motivations were correct. It makes no sense to publish stuff for general use under individual accounts, which is why Tom Preston-Werner moved TOML to its own org, for example. (Have I mentioned lately that I made the TOML logo?)

But the inconvenient truth is that GitHub has fostered this behavior by making organizations a real pain in the ass — especially for private stuff, which still costs a ridiculous amount for a large group.

This means that everyone should post all their public code to their personal GitHub by default. I fucking hate this, but stepping back and looking at my personal GitHub account, it looks like I have done very little even though I have thousands of lines of code and hundreds of hours of work in it.

My mistake was not self-branding enough, and by self-branding I mean publishing under my personal account the things that would otherwise make sense under an organization. So if you are reading this, learn from my mistake and make sure to polish and publish all your public stuff on your personal GitHub. Make everything else private (and even just use GitLab for casual storage).

Monday, December 30, 2019, 1:28:52AM

“What is an ocean but a multitude of drops.”

Sunday, December 29, 2019, 1:43:35PM

“Awesome” is frequently not awesome. So I ran across this GitHub topic and read through the lists — including Awesome Go — and I was blown away by how completely underwhelming and frankly just plain bad so much of the code is. That got me to questioning the entire “awesome” approach.

First of all, these lists have all the same flaws of wikis. There is zero curation. They are also not sustainable. Some are already several hundred lines long.

Second, I find myself doing dumb things just to meet the arbitrary criteria. For example, my timeme timer is virtually impossible to write a test for because it invokes a subshell and does stuff that simply cannot be tested easily (except by a human). Therefore, I cannot evaluate it automatically even though I can get it to work on my own.

Gocover is also seriously borked and yet is one of the required criteria for getting listed. No one bothers to actually look at the test coverage code, meaning I could just put some t.Log() calls in there and pass (which is frequently a perfectly acceptable form of testing, as Jim Coplien would scream).

I tend to find myself stressing over whether my stuff will be acceptable on the “awesome” lists rather than just writing really solid good stuff. That has to stop. Otherwise, I will find myself becoming more interested in self-branding and marketing like Sidre and other one-trick-ponies.

Saturday, December 28, 2019, 11:36:36PM

Woot! I did good. I never get tired of catching bugs and helping fix great software that I really believe in. Andrew’s PEG package is definitely one of those things.

Saturday, December 28, 2019, 10:04:19PM

While working on Easy Input-Output Test Specifications in YAML I decided to convert the CommonMark spec JSON file over. Turns out the examples are extracted by running a JavaScript script against the spec text file:

#!/usr/bin/env node

var fs = require('fs');
var util = require('util');

fs.readFile('spec.txt', 'utf8', function(err, data) {
  if (err) {
    return console.log(err);
  }
  // (middle reconstructed -- the original parsing loop was elided here:
  // collect each fenced example, splitting markdown from HTML on the
  // lone "." line)
  var examples = [];
  var state = 0, x = '', y = ''; // 0 = outside, 1 = markdown, 2 = html
  data.split('\n').forEach(function(line) {
    if (state === 0 && /^`{32} example/.test(line)) { state = 1; x = ''; y = ''; }
    else if (state !== 0 && /^`{32}/.test(line)) { state = 0; examples.push({markdown: x, html: y}); }
    else if (state === 1 && line === '.') { state = 2; }
    else if (state === 1) { x += line + '\n'; }
    else if (state === 2) { y += line + '\n'; }
  });
  console.log(util.inspect(examples, { depth: null }));
  console.warn(examples.length + ' examples');
});

Notice how it only looks for a . at the beginning of a line to cut out the markdown and the HTML from the example:

    ```````````````````````````````` example

💢 You should see how the official Vim Pandoc plugin just craps the fuck out on those backtick lines and decides everything from here on out doesn’t get formatted anymore, a typical example of why consistent code fences are so important. Using a ton of backticks to make a line is just fucking stupid. It shows that the person depending on them for visual style cues doesn’t know how to customize a fucking text editor.

This means that the spec.txt file is the authoritative specification as well as the documentation.

Someone probably thought that was “cute” at one point. I’ve been guilty of it as well. Well, it’s not. It’s fucking moronic. By making a home-grown, hybrid documentation file that is also the actual code for the test specification, you get something that is simply annoying for others to work with. Had the thing been done entirely in YAML it would have been much easier to work with because it would not have needed any custom parser script just to produce the testable JSON file.

That file also contains some of the worst examples of what is wrong with Markdown.

Ironically this is the same thing that annoyed me when I first discovered the CommonMark project: the lack of any of the machinery normally used when making standards (EBNF, ABNF, BNF).

Saturday, December 28, 2019, 1:10:54PM

During all this year-end cleanup I’ve realized a few things.

Friday, December 27, 2019, 1:27:04PM

GitHub or GitLab in 2020

I’m having a serious battle in my mind about GitHub and GitLab. I swear I revisit this question every year around this time when doing year-end cleaning. God knows I need to clean up all the piles of dead code out there and focus on the good stuff that remains. SkilStak has 144 GitHub repos. I have 39. I don’t even want to look at the skilstak-cornelius account (oh wait, I can’t, must have deleted it already).

Here’s the thing: a lot has changed over the last year. Here are some things to note broken down by each:



Since so much of the code I develop falls in the package, library, and module space — mostly because I think everyone should consider what they are making as a composition of reusable packages and modules — I’m inclined to keep all such things on GitHub where people are more likely to discover and adopt them due to the global dependency on GitHub. This goes for issue submissions as well. Everyone has a GitHub account. Some of them have GitLab as well.

One specific example of this is a Vim plugin for PEG I want to create. The Plug plugin manager automatically assumes GitHub. It’s easy to get around, but annoying.
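
Getting around it really is one line, though. vim-plug accepts a full git URL for anything off GitHub (the GitLab repo path below is made up for illustration):

```vim
" short form: vim-plug assumes github.com
Plug 'junegunn/goyo.vim'
" full URL form works for GitLab or anywhere else (hypothetical repo)
Plug 'https://gitlab.com/someuser/vim-peg.git'
```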

Which leads to the next question: org or no org?

You still have to pay for private repos for GitHub organizations, which makes GitLab a clear winner for such things. This has led to a rather annoying side-effect: most significant libraries, packages, modules — even entire frameworks — are permanently associated with individual accounts on GitHub, even if the person has nothing to do with maintaining the project any more.

The opposite is true on GitLab, where subgroups are only available to groups (and not personal accounts). This has created a very strong incentive to make a group your main thing instead of your personal account. Since every group is owned by a personal account and groups are unlimited, this makes the most sense. In fact, some people will even name their personal account with an underscore (_robmuh) and use their name as their org (robmuh). In my case I already had org names, so I kept my personal name robmuh.

I’m also not too worried about my personal account on GitLab not demonstrating the full extent of my coding prowess because, frankly, very few people will just find it directly on GitLab. That is where GitHub really blows GitLab away. It also supports the idea that one’s best coding work should be under one’s personal account on GitHub where it is likely to be forked and receive stars and such, which (unfortunately) are considered valuable when job hunting, pitching for conferences, etc.

I suppose that is part of the promoting-the-individual appeal of GitHub as “Facebook for developers” and is consistent with Microsoft’s “developers, developers, developers” mantra. To be fair, GitLab is more about “enterprise, enterprise, enterprise” when it comes down to it. Perhaps it is this distinction that should drive all of my decisions about where something should live.

And here are a few other things:

Arg. I definitely overthink things. But in a very real way people have been paying me my whole life to overthink things. So here’s my conclusion:

Imma keep all the following stuff on GitLab:

Everything else on GitHub:

Perhaps the hardest decision is about the repos related to learning content. On the one hand GitHub has great social media card support, but on the other I would likely be posting a link to an actual web site (with better social media card support) that links to the source repo. That way the learning content is discovered through web searches and the project repo is just a resource of it. Often there won’t be a repo at all because all the code will be included in the learning content itself (removing the dependency on any repo and the Internet itself as a PWA).

I think I’ve arrived at a basic rule to follow:

Which, I realize now, can be further boiled down to:

Friday, December 27, 2019, 12:36:58PM

The world can always use better test management libraries. For example, as much as I love Go’s integrated testing, often you want to create a language-agnostic test suite such as the one used for CommonMark and more. So far all of them use JSON and there doesn’t appear to be anything written in Go yet. I swear, people who say, “I don’t know what to code” really annoy me at this point. There is so much that is needed out there, even a bunch of simple stuff that everyone needs but no one has yet done.

Anyway, testing a parser and grammar requires a lot more tests than the average project. So I guess I’ll write one that is YAML based, making the tests a lot easier to read and follow. I’ll call it tinout.
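
Something like this is what I have in mind — a sketch of the shape only, not necessarily the final tinout format:

```yaml
# hypothetical tinout spec: plain input/output pairs any language can read
name: dtime basics
tests:
  - in: "today"
    out: "2019-12-27"
  - in: "2pm"
    out: "2019-12-27T14:00:00"
```

Being YAML, the same file doubles as readable documentation of the expected behavior.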

Friday, December 27, 2019, 11:31:56AM

I really have to start getting more good information out there to combat the bad. It is the entire motivation for everything I do these days. I’m not a very kind person when it comes to blatant, lazy ignorance — especially when it grandstands or comes in a pompous academic package. Case in point, Functional Pipes in Go is one of the dumbest blog posts I have ever read about Go. Such uninformed shit is the reason I stopped reading a bunch of stuff on Medium, and added the whole lot to my clueless page. In all cases I know people mean well, and I respect them as humans, but so many are just so fucking wrong.

I’m wrong all the time myself. I even leave it here without shame for others to discover and find on their own. I get worried, however, that people will continue to read about this stuff and never challenge the approach, following the herd right off the cliff.

I’m also reminded that the people I really want to hear from do not self-publish much at all. TJ is a solid example of that. Andres Snodgrass (my new hero, who has nothing but a LinkedIn and GitHub account) doesn’t either. Time and again I see that the true authorities on a given matter are also the least likely to have anything published about their findings. In fact, the more they put out there, the higher the percentage of bullshit it all contains. That is what I want to fight to improve. If RWX can get more of these authorities to write, all the better. I know, because one of the smartest people I have ever mentored has nothing written about the brilliant machine learning and other algorithms and proofs he was so obsessed with. He just liked doing it all, never sticking with anything long, and he certainly hated anything to do with writing about it. This is also very common among autistic people, whom I help a lot.

Friday, December 27, 2019, 3:34:04AM

PEG is so much better than anything lex or yacc ever provided. Turns out PEG has been around since 2004. I don’t think I’ll ever look at context-free grammars in the same way again; they are just so archaic and cumbersome to work with. The ordered-choice integration is so nice, as is the negative lookahead, but the absolute best reason to use it is the inline Go coding that updates the central state of the parser. PEG files are almost as easy to read as EBNF (on which PEG was designed) and way more powerful than ABNF. It is like writing either of these but being able to write your Go code as well without it looking too cluttered: the absolute best of both worlds.

I finished the little htime utility package in Go I had started, to allow command-line and touch-friendly date and time entry with duration spans. I still need to polish and finish all the example unit tests before posting it to “Awesome Go” and tweeting about it, but it was just such a joy to use.

I really cannot wait to implement Ezmark entirely in PEG. It should go rather quickly. One of the great advantages of this PEG implementation is that there is a -noast option when memory is tight, otherwise you get a full AST for free which can be easily dumped to JSON. All of that is included. All I need to do is essentially write the specification of the grammar, as I would already have to do.

Thursday, December 26, 2019, 3:52:27PM

Oh my God, I’m so obsessed with PEG now after only playing with it for, like, an hour. In fact, it’s got to be the most exciting discovery since discovering Go itself.

I am already obsessed with natural languages and parsing and syntax so this just puts me over the edge. It’s like everything great about regular expressions plus the ability to associate those matches with running code. The ideas for markup, programming, and even fictional languages with full grammars are blowing up my mind. I almost want to implement the entire CommonMark standard using PEG just to show how easy it is to make modifications to the grammar. Fuck all this extension support shit. It’s so easy to modify a grammar in PEG that everyone should just be writing in PEG and modifying for their needs. The result is ridiculously more efficient on every level and runs at speeds comparable to native C.

This makes the conversational (CAVA) stuff I was working on take on a whole new meaning. Written in PEG means that new actions and predicates could be added directly with the actions associated with them. That is huge! Why everyone in the conversation / natural language world isn’t freaking out about this boggles my mind. Actually, they are obsessed with AI machine learning for that stuff, but there is an enormous amount of great stuff with deterministic, algorithmic approaches that involve simple, expandable parsing approaches that PEG facilitates.

One thing is for sure, everyone here at SkilStak will be required to create their own language. PEG provides an amazing way to introduce the theory of such things in a practical, hands-on way. Later they can go on to write their own PEG and context-free grammars in whatever CS field they go into. I need to write the VIM plugin for Go PEG first so they can visualize it. There’s just too much fun stuff to do!

Thursday, December 26, 2019, 12:10:53PM

Just read about TJ’s go-naturaldate parser and discovered PEG for the first time. He uses pointlander/peg, which is a port of the C version (vs. pigeon, which is a port of PEG.js).

I looked at pigeon and, no, God no. It is exactly what I would expect a port of a JavaScript library to look like. I mean, pointlander’s blows it away for readability and implementation. The peg engine is even specified using its own syntax which is so fucking cool.

Picking a PEG engine is a big deal since it is something that, once you become intimately familiar with it, you will likely prefer forever. The good news is that you get almost effortless opportunities to write conversational and other languages, which I fucking love. The bad news is that it removes you slightly from the implementation.

I can safely say I will never need to parse PEG from JavaScript. It makes zero sense so long as Go exists. It is always better to do that parsing and rendering before something is sent to a browser. I dunno, perhaps if you were creating something for an Electron app, but at that point you really need to think about a Go/Qt implementation instead.

The greatest advantage of PEG is that it translates almost directly, one-to-one, to an ABNF with an implied priority order. In fact, the peg file is very much like a specification, but one that includes the implementation of the parsing for each rule.

In fact, I may never write another line of ABNF again, just peg files, unless perhaps I need to actually submit an IETF draft that requires it. That is a little frustrating given the work I have done on ABNF lately, but oh well, at least I discovered this now. It makes the case for following good developers really closely and looking at their design choices, and doing a lot of reading, of course.

On the bright side, peg has negative lookahead assertions, which ABNF does not. So I will be adding a peg implementation of the core ABNF rules I wrote, eventually.
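
To illustrate what ABNF is missing, a peg negative lookahead reads like this (a made-up rule for illustration, not from my core set):

```
# match a '#' only when it does NOT begin an ATX heading,
# i.e. only when it is not followed by a space
HashLiteral <- '#' !' '
```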

The deeper I look at this the more I see similarities to the Pandoc approach. In fact, Bryan Ford is at Berkeley with John MacFarlane, which makes me wonder if John took a CS course covering parsers, including context-free grammars and PEG. The fact that he prefers Haskell (used and preferred by academics more than most, despite its butt-ugly syntax producing the worst spaghetti code I have ever seen — okay, second to bad Perl) fits that theory.

Ironically, having discovered this peg package in Go I could probably rewrite all of Pandoc in less than a couple of months, though that is not my intent at this time. I just want a Pandoc-free parser and I definitely have found one. Once again TJ steers me right. He is such a brilliant artist first, and it shows in every technical decision he makes.

I have to think the Universe was somehow involved. I had just come up with an amazing tokenized binary format for compiled Ezmark files that could be parsed as a reliable stream of state events. I was going to add a stream.bin (or some such file) along with an index.html. I love the result, but at the end of the day I just need to parse Ezmark without the Pandoc dependency, and discovering PEG has put my feet back on the ground. In fact, we’ll see how large the parser becomes, because I have a feeling it would fit on most small devices with TinyGo, supporting Ezmark applications on such devices.

Thursday, December 26, 2019, 12:07:14PM

So glad I listed Neovim as stupid tech and never used it. Vim 8.2 destroys Neovim in utility and implementation, and govim, an amazing plugin that uses the Vim 8 language server API to allow plugins to be written entirely in Go, is simply not available for Neovim.

No Neovim support in govim

Wednesday, December 25, 2019, 12:36:30PM

Confirmed that no byte-order mark is needed for UTF-8 (and frankly Microsoft shows how stupid they are by adding it to everything with Notepad).

Wednesday, December 25, 2019, 3:12:56AM

Been a fun Christmas Eve, for us. I’m sure all those we shut down didn’t enjoy it as much. (They can fuck right off and know what I’m talking about. If you are reading this, imagine me smiling smugly in your fucking moron face watching your ass get carted off. Mortal Kombat, really?! You had to know we’d track you, dumb ass.)

The thing about me is that I’m a target. I get it. In this case, though, we fucking won, and on Christmas Eve no less. Just shows God is on my side. I love my wife. I love my life. No one can take down my joy, no matter how hard they try.

I will say this. Two-factor authentication using a phone is perhaps the most moronic invention since banks that require numbers and fancy characters in passwords that are limited to 10 characters (fucking morons who don’t understand entropy).

In fact, owning a “smart” phone is one of the dumbest decisions anyone can make, for dozens of reasons too exhausting to explain. No one gives a shit. I get it. They are far too “convenient” for people to care about how ridiculously vulnerable they make every person who owns one. Even Elliot in Mr. Robot has one (which he never fucking would in real life if he were actually interested in security. No, Signal isn’t enough to save you. The feds would have pinged right to his fucking door in less than a day. It’s moronic writing, but interesting.)

If you lose that “smart” piece of shit your life is over, and if it gets hacked (which is ridiculously easy) your life is equally over if the hacker wants it to be. Seriously, everyone with a “smart” phone is a fucking moron but they just don’t understand why. Those who understand why are motivated to keep those who have them from ever realizing why. This isn’t tin-foil-hat conspiracy thinking, it’s fucking reality. Only the brain-dead, the seriously uninformed, and the ignorant own “smart” phones and trust phone-based two-factor authentication. It’s the objective truth, and I have demonstrable evidence backing that fact. It’s why my family has old-school phones, and even those are bad enough because of cell tower isolation pinging. At least they are much harder to hack.

Merry Christmas, everyone, but especially those smart and courageous enough to throw their “smart” phones in the fucking trash. They just aren’t worth it. Most never will, too stupid to understand why and too fucking lazy to live without them. If you truly understood what you are giving away, the serious danger you are placing yourself and your family in by using them, you’d never entertain the thought. But then again, we have millions of people who literally bought a wiretap for their homes and talk to it every day. We are a nation of fucking morons and that ain’t changing any time soon. Those intelligent and diligent enough to understand why will simply dominate the poor assholes who don’t get it. The Nest camera hacks are just the beginning. The IoT driven by corporate greed will destroy modern privacy and civilization. It’s already happening.

Happy New Year!

(This will always be remembered as that Christmas where I fucking owned the morons who thought they could fuck with me. Enjoy prison you fucking felons.)

Tuesday, December 24, 2019, 6:01:17PM

I have decided to keep the strict RUNE standard formally forcing all Ezmark source files to be one of the acknowledged languages according to the Unicode standard. This will indirectly force Ezmark readers and renditions to be fully compatible with existing reader and browser technology. If someone wants to write in anything else they can do so within a fenced block, which is always raw. Then it’s up to the rendering application to scan those raw fenced sections however they want, including code with syntax or just a div that has a different font set for it.

I realize there is one extreme edge case where the byte sequence marking the end of a fenced block might somehow occur in the block, but such is very unlikely. Currently those end markers are:

fenced-end-bt  = LF "```" [CR] LF
fenced-end-ti  = LF "~~~" [CR] LF

I realize this prevents creating a fenced block that contains both, say when illustrating how to write Markdown, but adding another ~~~~ just heads down a rabbit hole, because what if you want that along with the others? It seems best to just have the one alternative and let people deal with the issue at the content level instead. Having an infinite number of possibilities — where you have to remember the specific opening sequence mark — is completely against the goal of the Ezmark project, and frankly it’s just stupid; nobody needs that.

Tuesday, December 24, 2019, 5:27:42PM

While reconfirming use of the PUA planes it occurred to me that the current Ezmark specification depends on RUNE, which covers only those language runes acknowledged by Unicode. Go is only at 12.1 and not 13. This makes me question the specificity in my core rules, because what if someone wants to use RWX for a language that has yet to be codified? Indeed, that would be really fun to enable. Imagine a site entirely in Elvish (Tengwar). I definitely need to broaden it.

I also read this discouragement about what I am envisioning. They say that I should use “standard markup mechanisms, such as those provided by HTML, XML, or other rich text mechanisms” but they all suck! For example, to do this all with XML (without attributes) would increase the output 10 times at least. Pandoc has a JSON AST version, but it isn’t streamable. It is just a node tree, which isn’t the same thing.

In fact, I’m quite confident no one has had the idea of using Unicode to signal state transitions in a stream of Unicode code points (runes). But what if everyone did that? Would that create global Unicode collisions? Is that the very reason that character-based tagging is discouraged? Is it because the infinite possibilities possible with tagging would indeed be finite if everyone did it?

Here are the main reasons I think it would be okay in this case:

  1. Every stream / document has a versioned identifier enabling changes.
  2. The compiled stream of state tokens is secondary to the authoritative source.
  3. The warning is specifically for “embedding in plain text”.
  4. This is more of a state stream recording than a “plain text” file.
  5. No human should ever look at this file, only scanners.
  6. It really is just a compiled form.
  7. It’s just a compressed form of the same in XML or JSON.
  8. I can create an XML or JSON view of it as well.

I’ve talked myself into it. Now to find a place in the PUA.

Tuesday, December 24, 2019, 4:29:32PM

While I was writing that last entry, which really helped me think out what is needed and if it is possible and practical, I wrote this little utility sub-command that I added to the existing rw tool and can call with rw torunes. It takes stdin or combines the arguments. I used it to produce the rune arrays in the mock-up of the Ezmark state token stream.
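
The guts of it look something like this (a guess at my own throwaway code, with stdin handling omitted; `toRunes` is just an illustrative name):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// toRunes joins the input and prints each code point as a Go rune
// literal, ready to paste into a []rune mock-up.
// e.g. "hi" becomes []rune{'h', 'i'}
func toRunes(s string) string {
	lits := []string{}
	for _, r := range s {
		lits = append(lits, fmt.Sprintf("%q", r))
	}
	return "[]rune{" + strings.Join(lits, ", ") + "}"
}

func main() {
	fmt.Println(toRunes(strings.Join(os.Args[1:], " ")))
}
```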

I love programming. It is so powerful. A stupid little utility like that just saved me hundreds of hours over my lifetime. And the fact that I could just easily add it using the Tab Complete Commander package made it that much more rewarding.

I really need to decide where to host all of these permanently. I’m inclined to move them all to the S²OIL project permanently. That way they are part of the justification for a non-profit instead of tied to SkilStak, which is an instance of a S²OIL community, not S²OIL itself. (I must have some serious Autism in my brain somewhere to obsess about such things).

Tuesday, December 24, 2019, 2:24:40PM

Often I will be woken by an idea of something I’ve been brewing over. This morning it was (once again) related to the Ezmark specification. Last night I read a lot of stuff slamming “push” parsing vs. “pull” parsing, specifically related to the SAX and StAX XML Java libraries, and I decided to stick with the “pull” parser approach. Rather than creating a bunch of parsing events and handling them with a passed collection of callback functions, I will create a stream of tokens that can be scanned using a scanner provided by the application using the parsed Ezmark data. This boils down to detecting state changes and emitting a token to represent each one — for example, sending a DocumentBegin token at the beginning of a document and a HeadingOneBegin token when seeing a # at the beginning of a line. Then the application using the scanner becomes mostly just a state engine as it scans through the stream.
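
The shape of that token stream can be sketched in Go. Everything here is invented for illustration — the token names, the code points, and the toy one-line “parser”:

```go
package main

import "fmt"

// Hypothetical state-transition tokens in a plane 15 PUA sub-range.
const (
	DocumentBegin rune = 0xF0000 + iota
	DocumentEnd
	HeadingOneBegin
	HeadingOneEnd
)

// parse turns a one-line document into a flat rune stream of tokens plus
// content; the application ranges over the result instead of receiving
// callbacks (pull, not push).
func parse(doc string) []rune {
	out := []rune{DocumentBegin}
	if len(doc) > 2 && doc[0] == '#' && doc[1] == ' ' {
		out = append(out, HeadingOneBegin)
		out = append(out, []rune(doc[2:])...)
		out = append(out, HeadingOneEnd)
	}
	return append(out, DocumentEnd)
}

func main() {
	for _, r := range parse("# Hi") {
		fmt.Printf("%U ", r) // U+F0000 U+F0002 U+0048 U+0069 U+F0003 U+F0001
	}
	fmt.Println()
}
```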

That made me remember something I toyed around with several years ago: assigning a Unicode code point from one of the upper planes to each state transition token. I had been doing a lot of work with terminal color support, which is all about escape sequences that are very similar. I thought, why not make the entire thing into a stream of data that could be streamed over a socket with very little memory? The same stream could be stored as well, even in a database or a file system.

That was when it hit me. The parsed stream of state transition tokens could be saved as binary data in the README repo subdirectories. That would mean after a build there would be three files:

  1. Original
  2. index.html
  3. README.ez (or maybe .bin)

This would enable the entire RWX global framework to exchange README.ez files and parse them with simple streams of runes (Unicode code points). There is no knowledge of how that stream came to be.

This is huge. It means the beginning of a way to potentially pre-compile Ezmark data into a universal, streamable binary format.

The thing about having these kinds of thoughts is that I can’t un-have them. Once I see what is possible I can’t think about anything else. This has to be created, even if it means I have to limp along with the simple Pandoc building I am doing right now. It means that search will have to wait, but I’ve already got all that covered and the JSON rendering created and tested.

The biggest challenge is the attributes involved in any state change. Take headings, for example: I could either make a single Heading token with an attribute of 1-6, or six different state change tokens. This is because most state changes are not boolean, not just on or off. Links get even trickier:

Here is a [link](/some-where/) to parse.

We cannot be sure what a [ is without peeking all the way ahead to see it closed.

But there is stuff between the [ and the ] that will be creating its own state changes, including other [ runes.

So the possible state change tokens for the link above would be:

The only way to avoid attributes is to force them all into states. For example, take the following complicated span with attributes:

[skilstak]{#part1 .spy}

This becomes the following rather involved state sequence:

This looks far more complicated than it actually is in code and data, but the overwhelming advantage is how easy the application scanning becomes. There isn’t even a special package needed. It can be done with the standard Go scanner or even just a bufio.Reader.
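
A sketch of that claim, with an invented token code point (here in the BMP private use area) and an invented rendering:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// hypothetical state token code point, not from any real spec
const headingOneBegin = '\uE001'

// render shows how little the application needs: a bufio.Reader hands it
// one rune at a time and a switch acts as the state engine.
func render(in string) string {
	var b strings.Builder
	r := bufio.NewReader(strings.NewReader(in))
	for {
		ch, _, err := r.ReadRune()
		if err != nil {
			break
		}
		switch ch {
		case headingOneBegin:
			b.WriteString("<h1>")
		default:
			b.WriteRune(ch)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(render("\uE001Title")) // <h1>Title
}
```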

God that’s cool (even if I do say so myself). The goal is simplicity for the user and developer.

Another trick I learned from Go’s scanner code and Go’s amazing token set is to use integers in a certain way for typing tokens rather than a heavy internal type system (yes, even with interfaces). As far as any application is concerned it’s just a bunch of incoming runes. Sure, helper functions for type can be created, but they are really not needed. It is much faster to simply check the value of the token (an integer) for being within a certain range to determine its type. It’s therefore important to leave room in the rune / character set for expansion, a sort of sub-plane within an Ezmark reserved plane of runes.
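
A sketch of the range trick, with invented sub-plane boundaries (go/token does the same kind of range comparison for its literal, operator, and keyword classes):

```go
package main

import "fmt"

// Invented sub-plane layout inside a hypothetical Ezmark reserved range.
// A token's type is just an integer range comparison.
const (
	blockLo  rune = 0xF0000 // block-level state tokens
	blockHi  rune = 0xF00FF
	inlineLo rune = 0xF0100 // inline-level state tokens
	inlineHi rune = 0xF01FF
)

func isBlock(t rune) bool  { return t >= blockLo && t <= blockHi }
func isInline(t rune) bool { return t >= inlineLo && t <= inlineHi }

func main() {
	fmt.Println(isBlock(0xF0002), isInline(0xF0102), isBlock('A')) // true true false
}
```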

Tuesday, December 24, 2019, 2:12:49PM

Woke up to this very nice message in my inbox (submitted from a comment box on my blog):

Happy holidays, Mr Rob!

I want to thank you for all of the amazing information & resources that are available here on your site. It is always a pleasure when I come here and see a new blog post. (Even [especially] when the subject matter goes over my head - it shows how much room I have to grow.) I have been visiting your site semi-frequently throughout the year. It is amazing and encouraging to see how passionate you are about getting others to understand technology on a deeper level.

I am a web developer who did not take the traditional college path to get where I am (boot camp grad, please don’t hold that against me). When I visit SkilStak I view it as being like a path of breadcrumbs leading me to the things that are important and that I should be focusing my time & effort on learning. You have my gratitude for so freely sharing such a vast amount of info, often in areas that reveal to me where my tech “blind spots” are so that I may delve deeper. My goal for 2020 is to have my personal website up & running. SkilStak has been a big inspiration on my finally putting in some work towards that goal.

Thank you for creating this amazing space for those of us who would otherwise have never met you in person. Keep up the incredible work!

if celebratesChristmas {
    fmt.Println("Merry Christmas!")
} else {
    fmt.Println("Happy Holidays!")
}

With gratitude,

< redacted >

PS - You may recall that we had a brief Twitter exchange several months ago. I have been learning to do more on the terminal than ever before, but I am still hoping for and looking forward to an eventual Terminal Master course! Until then I will be following the book you recommend here (Learning the Linux Command Line, complete with Bash v4 - I installed Ubuntu last week and I can’t wait to become a true terminal master!)

Sunday, December 22, 2019, 4:57:43PM

Turns out it wasn’t nearly as much work as I thought to create a new core rule set based on Unicode, thanks to Go.

Sunday, December 22, 2019, 3:53:51PM

FYI, Haskell doesn’t have anywhere near the Unicode range support that Go has, which makes sense since Rob Pike and company helped create UTF-8 (as well as Go). At this point it is a no-brainer that I need to get off Pandoc as soon as possible, but I’ll be dependent on it for a while to maintain this as is. In fact, I’m really torn about maintaining dual support for a Pandoc builder and an integrated builder once I have the integrated one working. I think I’ll finish the alpha version using Pandoc so I have something to use, then I’ll start on the non-trivial work of really completing a solid Ezmark event-driven API and AST.

Sunday, December 22, 2019, 12:07:20PM

I really love ABNF — even more than regular expressions (which in my old age and wisdom I have learned are really, really inefficient for most things). If I could write ABNF all day — every day — I would. That’s fucked up. I mean, just look at this beautiful stuff:

LN          = %x0A ; \n
BLK         = 2LN ; block separator, contextual

VASCII      = %x21-7E
VUC         = %xA1-167F / %x1681-1FFF
            / %x200B-2027 / %x202A-202E / %x2030-205E
            / %x2060-2FFF / %x3001-D7FF / %xF900-FDCF
            / %xFDF0-FFFD / %x10000-1FFFD / %x20000-2FFFD
            / %x30000-3FFFD / %x40000-4FFFD / %x50000-5FFFD
            / %x60000-6FFFD / %x70000-7FFFD / %x80000-8FFFD
            / %x90000-9FFFD / %xA0000-AFFFD / %xB0000-BFFFD
            / %xC0000-CFFFD / %xD0000-DFFFD / %xE0000-EFFFD
VRUNE       = VASCII / VUC
RUNE        = VRUNE / SP

In fact,

Sunday, December 22, 2019, 11:31:05AM

Framework for Ezmark event-driven processor (parser) and handler is in place. God I love Go’s interfaces. Most brilliant type system the world has ever known (hyperbole warranted).

Been deciding where to put all this code. I really stress about such decisions (unlike the many who just throw it under their personal GitHub repos and don’t even know GitLab exists). Releasing packages and libraries requires a really solid import path.

As much as I like having code under the skilstak group I realize that is slightly harder for people to type and remember. The S²OIL path seems much better.

I also feel inclined to move more, if not all of my code over to S²OIL because I do hope to make it a full non-profit at some point. I’m hoping (perhaps dreaming) of receiving grants to continue that work and the more code released under the non-profit the more I can demonstrate we are fulfilling our charter. SkilStak then remains a for-profit (which makes me laugh a little given the fact that we’ve never cleared a profit since I put all the money immediately back into the business) and I keep all my mentoring work under that.

Also, when I finally get to the point where I can do conference tours and such it will all be as S²OIL and not SkilStak. I do hope to rally up some help that way. I really think when people see everything working and how to apply it for themselves that it will become more of a thing.

Saturday, December 21, 2019, 9:32:47PM

Reading through the Simple API for XML documentation I really had a wave of 90s nostalgia rush over me. I remember doing all this in Java way back when XML was all the rage. Toying with the name Simple API for Markdown, which would be SAM or SAMD. I could make it for Ezmark and call it SAZ. Pffhaha, nah. I’ll just call it the Ezmark API and if someone wants to riff on that they can go for it.

The SAX site has a perfect explanation of why events over trees. I feel like everyone in the Markdown parsing world is just completely ignorant on this specific topic.

It dawned on me that the reason an event-driven solution for Markdown doesn’t exist is all the variations. You have to have a very stable structure so that you can map events to it. I suppose you could extend it with different events. It’s all food for thought I suppose. The most important thing I have to come up with now is the names of the events for Ezmark.

This also reminded me that I can create a streamed renderer that uses reserved Unicode characters to create a stream of runes with certain runes representing the event, much like terminals do with escape and control characters.

God, I was doing all of this exact thing three years ago and now I’m rather pissed I didn’t follow up with it. I have to. I can’t finish RWX without a solid format and parsing architecture since it is literally the heart of the whole platform.

Saturday, December 21, 2019, 9:10:07PM

Been reminding myself of all the parser design I started three years ago and comparing it to the internal parser from Pandoc (in Haskell), Goldmark, and the Go AST parser itself. I was reminded that my original design is event-driven and I’m really sold on the reasons why. It enables people to register event handlers only for the stuff they want and also allows real-time rendering in another goroutine that won’t bog down the parsing itself. In every other parser stuff would block if a real-time rendering scenario were added, but they can’t add that anyway because they need the entire document buffered in memory before they can render (broken old) Markdown.

It seriously blows my mind that there isn’t an event-driven current Markdown parser in existence. The XML SAX parser was and is phenomenal and really set the bar for event-driven parsing. The performance is spectacular while being drop-dead easy to implement. If you want to build an entire node tree, fine, create events for everything. But if you just want all the links then just register handlers for CloseLink and you’re good, without all the wasted additional coding and processing.
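A sketch of what that registration style could look like in Go (all the names here — Event, Parser, Handle — are mine, not a real Ezmark API). The point is that a handler costs nothing unless something actually registers for its event:

```go
package main

import "fmt"

// Event is whatever the parser emits; here just a name and a payload.
type Event struct {
	Name string // e.g. "OpenLink", "CloseLink", "Text"
	Data string
}

// Parser with SAX-style handler registration (illustrative only).
type Parser struct {
	handlers map[string][]func(Event)
}

func NewParser() *Parser { return &Parser{handlers: map[string][]func(Event){}} }

// Handle registers a callback for one named event.
func (p *Parser) Handle(name string, f func(Event)) {
	p.handlers[name] = append(p.handlers[name], f)
}

func (p *Parser) emit(e Event) {
	for _, f := range p.handlers[e.Name] {
		f(e)
	}
}

// Parse is a stand-in: a real parser would scan runes and emit as it goes.
func (p *Parser) Parse(events []Event) {
	for _, e := range events {
		p.emit(e)
	}
}

func main() {
	p := NewParser()
	var links []string
	// Register only what we care about; everything else is skipped.
	p.Handle("CloseLink", func(e Event) { links = append(links, e.Data) })
	p.Parse([]Event{
		{"Text", "Here is a "},
		{"OpenLink", ""},
		{"Text", "link"},
		{"CloseLink", "/some-where/"},
		{"Text", " to parse."},
	})
	fmt.Println(links)
}
```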

In fact, while I will have an AST, it will be entirely separate from the parser. Since I’m not planning on having any kind of (slow) extension mechanism that means I can take a page from Rob Pike’s Go parser and put it all in essentially a large case statement. The loss of modularity is made up for by the fastest possible parsing times. Then, when the AST is needed, I write an AST node tree package that implements handlers for all the events required to cache and write the node tree.

I have all the respect in the world for the coders who have done these other parsers, but honestly all of these Markdown parsers fucking suck. I just have to prove it by showing what I mean. It is the only way they will believe it, but the benchmarks will blow the fucking doors off these others. I know it. Perhaps that will inspire someone to write a full CommonMark event-driven parser someday. I never will. Markdown is too fucking broken. Imma build mine and be fucking done with it. Even my wife understands the motivation to consolidate one-best-ways for everything to get a singular syntax instead of the “do whatever the fuck” syntax Markdown has now.

Saturday, December 21, 2019, 7:16:11PM

Fuck! I just noticed that the Pandoc AST defines indented (as-is) text blocks as CodeBlock the same as fenced code blocks. I understand why this is, but I hate it. There are so many things just plain broken with the original Markdown, and unfortunately even the stuff that got enough consensus for CommonMark is still fucked up. No one needs an infinite number of ways to indicate a “context separation”. The fundamental flaw in Markdown is that it was meant for artistic expression in the source text rather than being treated from the beginning more like a coding language with consistency having the priority. It has totally and completely fucked up every single Markdown derivative.

This means that Ezmark must happen. It will be the first and, as far as I know, only markup that has consistency, ease of use, and streamable efficient parsing as the overriding priorities. It will be the fastest possible Markdown-inspired syntax in the world — and that is no exaggeration.

All this shit is going to push out stuff for several months, but the benefit will be an actual ABNF-specified Ezmark that I can submit to the IETF and promote as the best alternative to Markdown to have ever existed because it is based on the best of everything to have emerged since that time — particularly Pandoc Markdown, which is brilliant on most counts.

Saturday, December 21, 2019, 11:50:52AM

After catching up with one of my senior members who was involved with the early Essential Web design and imagining, he sent me this information about The Open Book Project. This is exactly the type of hardware we were envisioning. At some point our efforts need to merge. RWX is a perfect project to run on Open Book devices.

This also solidifies the decision to drop all Pandoc dependency. We can be compatible (AST, etc.) but not dependent. At this point my simple version will get me by until I can finish the parser. I’ve decided (today at least) that I will not be releasing even an alpha of the rw tool and library until the ezmark package is complete, including a fully tested Pandoc-compatible AST JSON and HTML renderer. I will not be including support for different readers and writers; there will only be a single input (reader) and two writers, JSON (which is just the MarshalJSON rendering of the AST) and HTML. This way I can leave the heavy lifting of converting content into other formats to the pros on the Pandoc project, which I still really want to support; it just doesn’t meet my needs on the RWX project.

Saturday, December 21, 2019, 11:19:17AM

My Namecheap reminder for renewal came up. It was timely because I am seriously struggling with the decision about Pandoc, Goldmark, or other dependency. The original plan for Essential Web was to use BaseML, which we later changed to Ezmark. I completed the first draft of the ABNF and was starting to write a C parser implementation with the team when we decided to go with auto-wrapping lines instead of supporting either long-lines or multiple lines in a block (like RFC text). That triggered a change to the ABNF and streamed parsing because there was no longer a limit on line numbers.

As much as I appreciate the work that went into the other markdown parsers they all have omissions and straight-up design flaws that are really fucking annoying:

There are certainly times when a lot of options are nice, but this ain’t one of them. The goals of RWX are extreme conformity to the simplest possible standard in order to promote maximum efficiencies including the elimination of centralized search engine dependency. If content is predictable and standardized we can easily parse and search it locally.

I cannot overstate how much the world needs this. CommonMark is amazing and I’m so thankful for it. But it’s not enough. They are working on adding attributes to it, which will be huge, but there are other essential extensions which must be included in order to have a worldwide acceptable standard: MathJax, for example.

The worst part is that the decision makers for CommonMark are being driven by motivations other than those I’ve brought up. They have no problem making it wildly complicated by allowing all kinds of artistic expression rather than identifying one best way to write something.

Bottom line: I need to write the Ezmark parser. There’s no escaping it. I’ve written a couple other parsers already and helped with the TOML one (for which I made the logo). But I need to redo the ABNF and formalize the grammar. It will be essentially simplified Pandoc Markdown.

One thing I was not aware of before with BaseML is how good the Pandoc AST is, which can be rendered as JSON. So I will make the Ezmark AST 100% compatible with the Pandoc AST so people can do whatever. Ezmark will therefore be a 100% compatible subset of Pandoc Markdown. This means I can deliver on the alpha using just pandoc for the rendering, but the downside is that I would then be dependent on Pandoc templating (instead of Go templating). Even though that will only affect template/theme creators, I am not sure I want to support both Pandoc templates and Go templates.

This is one of those times when as a developer/designer you stand in front of a major cross-roads that could seriously affect everything to come.

Friday, December 20, 2019, 6:35:34PM

Friday, December 20, 2019, 1:20:56PM

Netlify is so amazing. Just saw this on their feed:

Your first serverless function in one tweet:

  1. Save this as functions/my-first-function.js:

     exports.handler = async () => ({
       statusCode: 200,
       body: 'boop',
     })

  2. Deploy to Netlify
  3. Call it at /.netlify/functions/my-first-function

Thursday, December 19, 2019, 4:25:17PM

During a break I got to realizing that the HTML renderer for rw has to be considered completely differently. Once again, an example of bigger thinking than just what is convenient for the engineer implementing and distributing it. To explain I have to recap the use case:

If I start rendering to another place for all the HTML then only that directory gets published to the web. This causes a direct problem for the use case because the person must have two URLs for the same content. In fact, I had completely forgotten that the fact they are together is one of the best selling points because anyone can automatically get a copy of the source of the entire page. The default template will even have a button for View Source that links to the file in the same directory.

In other words, mixing Pandoc Markdown and HTML in the same place is preferred for the foreseeable future because the Web isn’t going anywhere even if the README World takes off.

Now as to other formats, it makes sense to render them somewhere separately. HTML is the only format that makes sense to keep with the rest. Not even PDF makes sense since it can be created from one of the graphical browsers easily.

So index.html stays in the directory (and should always be rendered from it).

Thursday, December 19, 2019, 3:34:14PM

Arg, it’s taking everything in me not to combine @yuin’s amazing work on Goldmark with the rock solid Pandoc AST. In fact, now that I’ve seen the possibilities it is practically impossible to ignore them. That way all I have to implement is the same CommonMark parser (that fulfills the tests) and include similar extensibility to what is in Goldmark. Because it would be so directly associated with Pandoc I could even name the package pandoc and include the utilities that exec the pandoc executable externally for those who prefer that. I could set up the architecture to match the best of both internals designs.

Here’s the thing. Pandoc has figured all this shit out as a project and community. They are far more informed than the Goldmark author about the overall need. Better to hitch my wagon to the Pandoc project than Goldmark, but Goldmark is the best thing we have for Markdown in Go at the moment. To do this correctly I actually have to get way better at reading Haskell so I can mirror the internals more directly.

I’m really glad I’m thinking through this whole thing because it just saved me a ton of time. By realizing ultimately that a fully compatible Pandoc implementation in Go is far more desirable in the end — and that Goldmark isn’t there yet — I can live with the original interim plan of building it all with a pandoc executable dependency. I’ll just be sure to build it so that everything else can eventually support a native Go solution, including eventually full support for Go templates instead of Pandoc’s more limited templating.

If I do this right the whole thing could gain favor with both Go template (Hugo) people and Pandoc people. Those are really the two biggest communities to take into consideration. I’ll just start with the Pandoc stuff because it is just so much better than anything in the Go world currently.

Thursday, December 19, 2019, 1:47:41PM

Can’t help but see how short-sighted the designs and directory structures of both Hugo and VuePress are. They are amazing projects with great people on them, but both completely ignore the possibility of alternative output formats.

They are also both built entirely on the assumption of HTML output. To understand the advantages of not doing that one need only look at the value of the Pandoc approach that does not assume any particular output format.

Hugo’s directory structure is particularly ugly and brittle, requiring a hugo new site just to get set up. In fact, Hugo’s internals are butt-fucking ugly. It might be the first (and only) Go option but at the core it is full of completely stupid design decisions. I have no idea who made them, but what a great example of short-sighted programming. It might take me an eternity to finish anything, but I would rather that than have thousands depend on something so fucked up and unfixable. The core problem with Hugo is typical engineer thinking. They lose sight of the core use cases and problems they are fixing.

VuePress is fucking elegant in comparison with everything hidden away in the .vuepress directory with nothing to do to get started but start writing content.

Hugo: “Let’s make the fastest fucking blog engine there is.”
VuePress: “Let’s make the most elegant, easy to use documentation platform (mostly for our own docs).”
Gatsby: “Let’s make a blog/document engine that depends on GraphQL?”

Sorry, but those are really short-sighted. The need is much larger than that. All these “bloggers” are really people looking to self-publish, search, and share in the easiest and most secure way possible. That is the need, not just a blog. When you widen the scope to see the real problem you also see needs like the following:

These other tools are amazing and built by amazing people. They are just so focused on specific needs that they really miss out on a lot and the brittle design shows it.

Thursday, December 19, 2019, 12:50:38PM

I love Pandoc, but reading the mailing list full of bugs that resulted from the last upgrade really has me concerned. In particular, the R project uses a different version of Pandoc than the released version. That project is one to watch because they have worked hard to keep what is essentially an external tool part of their core application. This immediately dismisses any inkling I had to embed Pandoc in anything. Having it be an independent, must-install-separately dependency was okay (even with the ridiculous cost to build times). But having any chance of getting out of sync with the official release is just too risky.

It is timely that I would learn of the Goldmark alternative, which brings me to another design dilemma …

Now that the rw tool will be entirely self contained the option of enabling it as a trigger on commit, like most SSGs do, is more available. That means people would simply commit and the build would happen automatically on the host system. But there are several strong disadvantages:

In fact, the only good thing anyone has ever said about that approach is that multiple people could be collaboratively changing the content however they want and the build process would remain consistent. But here’s the thing:

And then there is the increasing popularity of the idea of hosting your own README server. Sure, having a JAMstack host with CDN is nice, but all that becomes much less important when people can actually use clients that manage their own caching of the content locally. The future is not bound to HTML; it will use Goldmark to render directly to the screen with Qt, perhaps even on a device of our own making. That is the future that must govern any design decisions today.

Humm, this does make me rethink the whole render-with-source intermingling. The primary reason for that was to prevent the redundancy that, say, VuePress does by making duplicate copies of everything in the destination, but that should be a decision left up to the renderer (a word that I’m preferring more every day to “writer”).

I’m beginning to realize that it has more to do with allowing renderers to render results with dependencies on content linked to the master source, images and such. There might be a case where some renderers must have all the source copied into the .rw/dist subdirectory.

No matter what, being able to simply remove the .rw/dist directory to clean up a README makes a lot of sense — even more now that I’ve really been thinking about the entire no-HTML-dependency scenarios that we started with in the Essential Web. It plays to the idea of completely pluggable renderers, which is a strong design advantage. This brings out the practicality of having a .rw/dist/ directory that can further be organized by rendered output type (.rw/dist/html). If we do move to the .rw/dist approach (which is virtually identical to VuePress’s very elegant directory organization) then it would enable those who do want to add the rendering as a trigger, but more importantly, it would keep the main README repo pristinely clean.

By the way, the use of a docs/ directory as VuePress assumes is a bad design that assumes the content is sharing a repo with another project. It is always better to have a separate repo for such things — especially when the content is the source.

Thursday, December 19, 2019, 12:05:37PM

Apple fucked up Linux on Mac Minis in the latest firmware. So glad I read this, otherwise I could have destroyed our Mini farm:

I am trying now for days to get Linux installed on a brand new 2018 version of a Mac Mini. I got already a couple problems out of the way including a switch from Ubuntu to Arch, because Ubuntu doesn’t boot anymore with the latest Mac Mini firmware.

Apple is such a disgrace. What a fucking horrible company. Don’t know why I was ever swayed by them. Well, it was the amazing UI, high-end support for audio, and really how amazing the Adobe suite is on it. Hell, I’m using this Mac keyboard and it is the best I have ever used. So I suppose I can’t throw all of those facts out with “I hate Apple.”

Thursday, December 19, 2019, 2:24:08AM

Lately I’ve been really bothered by all the separate files in Go. It’s not that big of a deal, but especially when writing a relatively small library package it just makes so much more sense to put it all in a single file named after the library. Then you have just two other test files, one for the library and another with examples.

Wednesday, December 18, 2019, 5:46:35PM

Just noticed the last season of Silicon Valley is available to buy. Wife and I will be binging it all night tonight (a rare night off since the Linux group meetup looks like it is off this week).

After watching the first scene with Richard pitching the “decentralized Internet” to congress I get goosebumps. Before SV ever addressed the topic my pros and I in the community had started what we called The Essential Web. The entire thing was based on a super simplified version of Markdown (which remains), including testing and designs for hardware, Kindle-like readers for a new Web (that we would later call the KnowledgeNet instead).

First EssentialWeb Commit

I’ll never take all that work down even though it has all become defunct and adopted into newer content and libraries.

I realize now that all of that informed every effort to come that I have been honing over the last three years into what is truly at the core — self publishing.

In 2020 it is still impossibly hard for people to create content that can easily be published — and equally importantly — shared and searched without a centralized service. The trend, instead, has been fucking ironic things like Medium (which is just the first Web hosted on a service), and massive centralization and monitoring.

Not only does the “new Internet” need to be de-centralized. It needs to be insanely easy to encrypt any and all content anyone wants putting all of the control into the hands of the content creator as to what is in the content and all of the reader formatting control into the hands of the reader, as the founders of the Web envisioned (and Netscape destroyed).

💢 May Netscape and the greedy assholes who founded it burn in hell. I wish it had a grave to piss on. We had a perfectly working Web browser that was evolving in a better way and motivated by things besides corporate greed. Until that greedy, money grubbing, Zuck-coaching, dishonest, racist, fat fuck Marc Andreessen broke away, stole all the NCSA work, and sold everyone’s work for his own gain (just like Bill Gates). He gets into the Web hall of fame, but he’s one of the shittiest human beings to have ever walked the planet. Compare him to Tim Berners-Lee and their lives and you will see what I mean. I know I am a bad person for not being able to withhold my anger on this topic, but it is everything wrong with our world and I’m tired of people like this destroying it.

We have an opportunity to use the infrastructure of the Web and Internet, but to completely and totally decentralize it and fucking deflate the Google search engines and such. We just have a monstrous battle to get anyone to even give it attention given all the money behind maintaining the broken system we have. Still, I will make it. Me and the few visionaries who have seen this solution from the beginning. The Cypherpunks made GPG knowing that it would be a tough sell to the mainstream. Still they made it.

(And what is it with December and releasing stuff around my birthday? Probably some psychological reason I don’t understand.)

Wednesday, December 18, 2019, 3:43:05PM

I don’t think I have ever blogged about how much the simple sl management tool has helped. Just today someone was thinking they had an extra three weeks. Without any anger or frustration or argument, all I had to do was sl roll <name> | xclip and dump it into an email. When I use mutt I just do it all directly from the editor. This single thing, dumping the current or past schedules into every email correspondence with parents and members, has reduced late payments and misunderstandings to zero since I’ve been using it. It is such an example of how having the ability to program to one’s own needs is ridiculously empowering. It’s also an amazing example of how quickly anyone can create command utilities (with integrated tab completion) using the Complete Commander (yet another thing I need to polish up and publish more widely). Here’s the output:

 1. 2019-05-11 16 HERE 
 2. 2019-05-18 16 HERE 
 3. 2019-05-25 16 HERE 
 4. 2019-06-01 16 HERE 
 5. 2019-06-08 16 HERE 
 6. 2019-06-15 16 HERE 
 7. 2019-06-22 16 HERE 
 8. 2019-06-29 16 HERE 
 →  2019-07-06 16 PUSH 1 of 2
 9. 2019-07-13 16 HERE 
10. 2019-07-20 16 HERE 
11. 2019-07-27 16 HERE 
 →  2019-08-03 16 PUSH 2 of 2
12. 2019-08-10 16 HERE 
13. 2019-08-17 16 HERE (invoice)
14. 2019-08-24 16 HERE 
15. 2019-08-31 16 HERE 
16. 2019-09-07 16 HERE 

 1. 2019-09-14 16 HERE 
 2. 2019-09-21 16 HERE 
 →  2019-09-28 16 PUSH no show but bday
 3. 2019-10-05 16 HERE 
 4. 2019-10-12 16 HERE 
 5. 2019-10-19 16 HERE 
 6. 2019-10-26 16 HERE 
 7. 2019-11-02 16 HERE 
 8. 2019-11-09 16 HERE 
 9. 2019-11-16 16 HERE 
10. 2019-11-23 16 HERE 
11. 2019-11-30 16 HERE 
12. 2019-12-07 16 HERE 
13. 2019-12-14 16 HERE (invoice)
14. 2019-12-21 16 
 →  2019-12-28 16 PUSH New Years
15. 2020-01-03 16 
16. 2020-01-10 16 

 1. 2020-01-17 16 
 2. 2020-01-24 16 
 3. 2020-01-31 16 
 4. 2020-02-07 16 
 5. 2020-02-14 16 
 6. 2020-02-21 16 
 7. 2020-02-28 16 
 8. 2020-03-06 16 
 9. 2020-03-13 16 
10. 2020-03-20 16 
11. 2020-03-27 16 
12. 2020-04-03 16 
13. 2020-04-10 16 (invoice)
14. 2020-04-17 16 
15. 2020-04-24 16 
16. 2020-05-01 16 

Wednesday, December 18, 2019, 3:07:35PM

Just watched this video about Vim 8.2, and the part about making extensions using Go is super critical. Vim 8.2 allows any language to be used for writing plugins. As far as I know, NeoVim is not nearly as good, which just validates my position that NeoVim is mostly useless despite the code-base cleanup and more open project team.

I really need to get back into doing quick tip videos again. The fact remains that it is much faster to blog about this stuff than to make videos. So I probably won’t for a while. Those with the fortitude to stick with my blog will be rewarded.

The idea for a video came when he started showing off all the language server stuff, then more language server stuff using the govim plugin that is written in Go (which itself is very interesting since I can integrate rw into Vim as a plugin as well). The more he talked about all the cool hints and stuff a “language server” gives you the more annoyed I got.

I really don’t like “language servers.” All the hints get in the way and usually when I need to look up the syntax just ? golang whatever is way faster and includes looking up the examples and what others have said about it.

I also don’t need automated formatting during my edit session, preferring instead to add it to the save process. If I really do seriously want to fix the formatting I just save and reopen.

And all that automated testing that is integrated such as with VSCode is just brain-dead stupid compared to the following simple dump() utility and running go test on a loop. Getting all the output from go test is far more valuable than any IDE/editor integration.

Vim + TMUX + go test loop

By the way, that’s ABNF. I get to be an asshole now and say that probably 0.001% of people reading this will even know what that is or why it is so important for everyone to learn. I also know that probably no one else actually uses this method and very few appreciate the overwhelming efficiencies it provides. Normies will run to their cushy graphic editors while I continue to destroy their output and lookup times. If that sounds arrogant I’m sorry. It’s the fucking objective truth.

Anyway, that’s why I need to make videos again. Because most programmers learn everything from videos (unfortunately). Until I can get them convinced why words are better than videos I need to play that lame-ass game.

Wednesday, December 18, 2019, 12:44:29PM

Whew! God I’m glad I researched Goldmark more. Turns out there is support for MathJax, the default Pandoc Markdown math notation rendering supported in Hugo. There’s also a full YAML metadata support extension. That leaves only simplified Pandoc Markdown tables unsupported, and I can easily convert mine over until someone (maybe me) writes that extension.

The best part about this discovery (and the worst part) is that I am now completely free to abandon Pandoc templates, which are not even close to being as powerful as Go templates, which a lot of people already know from Hugo. The only $ and $$ in any source file will be for MathJax, which has become the standard markup for math and is fully supported in Pandoc for those wanting to render their source into any of the dozens of output formats Pandoc supports.

Seriously, this discovery could not have come at a better time.

You see, this is why research and staying connected is so critical to being a Prescient Technology Professional. Had I not seen this I would have fully implemented something that would have had massive technical debt from the beginning and been 10 times slower. This enables me to completely drop the dependency on Pandoc, which required a separate process for each page built. This means the rw build will be as fast as (and likely faster than) Hugo, which is famous for its rendering times. In fact, although Hugo takes a much different approach, this puts WorldPress almost in direct competition with Hugo. I like this because I also like that project, and once I’m complete our projects can push each other a bit. Right now Hugo is the dominant leader. Eleventy is trying real hard, but doesn’t have nearly the power. In fact, the goal here is something even simpler to use than Eleventy but with way more power than Hugo.

Wednesday, December 18, 2019, 10:35:06AM

I have decided to add a robust internal http server rather than a dependency on the wonderful Browsersync. I am seriously not comfortable with the insecure state of JavaScript, Node — and especially NPM. Plus there is absolutely no reason rw cannot be promoted from a simple CLI tool to a full end-point robust enough to serve stuff directly for those okay with that option. That would remove the dependency on services like Netlify and even on GitLab / GitHub. In fact, git wouldn’t even be needed. Certainly we’d encourage these amazing services for most, but not as a core dependency (which is what I have now).

I’m also not going to require HTTPS (but will support it). The entire approach of HTTPS is largely broken. It depends entirely on encrypting (essentially) the tunnel rather than the content. This means that content is unencrypted on disk. This one fact has allowed some of the largest hacks in history. If people had started with the idea that no content should ever hit disk without being encrypted in a way that requires a hardware token (YubiKey-like) to edit, we would never be in the mess we are in now. But people don’t like thinking about that because they think it would be too complex. “Stop boiling the water, Rob.” (Go fuck yourself, voice-of-Ron-in-my-head.) Yet they are okay using web-driven content management tools.

Browser support for encryption is fast these days (as ProtonMail has proven). That means content can even be unencrypted in the browser. But this promotes unwarranted trust between user and browser.

Visible on the horizon is the requirement to eventually build a graphic and textual reader that is entirely WorldPress aware. In fact, having such a graphic, easy-to-use tool would put RWP solidly in contention with WordPress and other CMS applications. I could even use the name from the now defunct project I started to create an open source, Typora-inspired BaseML Markdown editor: Escritoire (the name for the historic writing desks that rolled closed and could be locked). Come to think of it, the name fits even better now! OMG, it gives me goosebumps that all of this is coming together. I need to make a project plan for the release of each, maybe even some sort of minimal white paper describing them so I can communicate to others about them quickly.

README WorldPress Escritoire will be the name. This is good because I can build both RWP and RWX stuff into the single rw endpoint command, and then escritoire will just be one of the editor options detected automatically unless $EDITOR is set. I’m definitely not going to build it into VSCode (again, no dependencies on the extremely insecure Node/NPM). In fact, the only external dependency will be pandoc itself, which I have often written about removing. I might actually build in Goldmark support by default. I noticed Hugo moved to it (thank God). It has everything I use from Pandoc Markdown except simplified tables and the very critical LaTeX math notation support (which I suppose I could write an extension for).

It becomes clearer and clearer that there need to be two supported systems within WorldPress: an all-native system, and one that allows for Pandoc. Obviously the way to do that is to include some sort of builder extension in the architecture. At least realizing this now makes it easier. Hell, I already have that with Workgroups and Jobs, so I could even add other build processes such as image compression. I already have GPG signing (and soon encryption). Such additions really would put the entire thing solidly into the “personal publishing platform” space (and almost directly back in line with both the Essential Web and KnowledgeNet earlier directions). We would literally have everything to sort of piggyback on or replace the Web:

Imagine if all anyone had to do was write and save locally to publish their content — using whatever editor they wished. DropBox solutions came very close to this, but missed on so many counts.

I worked up this icon design some time ago to coincide with the stuff for S²OIL earlier. The color is the same as the icon for Git.

S²OIL Project Icon Designs

Time to get back to work. I tell you what, though, dumping like this really helps to identify cross-relations and uncover ideas that hadn’t fully formed. If everyone participated in such brain-dumping and reflection the world would definitely be a better place. We just need to help that along.

Wednesday, December 18, 2019, 10:15:54AM

I’m really enjoying the feedback on this site lately. I have a feeling a new problem is emerging: being able to incorporate all the great feedback! I imagine that is what a lot of project team members feel about approving merge commits to software. Having just a few people — or just one person — vet and execute them all is definitely a bottleneck, but a good one.

It definitely lights a fire under me to finish my rw tool and add back the site index, local search capability, and most recent changes page I had working with the shell version. The temptation now is to do a lot more writing before finishing the first Go version of the tool, but new content is less valuable until it is again discoverable from the main site and search engines.

I have temporarily removed the links to the outdated index and changes pages and, after discussing it again with my wife, have decided never to post anything that isn’t in at least some form of completion, avoiding broken links above all like the plague they are. Originally, when the scope was smaller, I used those broken links to remind me (using Muffet) where I needed to add content — really bad idea. Broken links are never acceptable today, ever. I think they even lower your SEO ranking. A better idea is to add a {.todo} attribute to the same spans of text, marking where linked content is eventually needed, and filter them out along with all the other links when doing extraction. I can cache them all when doing link extraction and incorporate that with rw todo to more quickly edit them. I can even create a front-end hidden console command to activate them all and see what they would look like.

Tuesday, December 17, 2019, 5:22:26PM

Just going through my LinkedIn invites (been ages) and one was there from a “Mergers and Acquisitions” guy. Seemed nice enough. Asked if he wanted to discuss my “exit strategy” and it took me a moment to realize he was assuming that I had started a company and was now looking to sell it off and move on. You know, like the fucking “serial entrepreneurs” I cannot read about without laughing my ass off. My answer would be, “Fuck no. I built this company because I fucking want to work on this shit” (which I imagine is the same response Richard Hendricks would make). Still it never ceases to disgust me how many people start companies with the exit/cash-out strategy on their minds from day one. This kind of thinking is what is destroying America and other late-stage capitalist countries.

Tuesday, December 17, 2019, 3:53:11PM

Gawd! It just occurred to me that the Twitter model of followers and lists could be applied in a decentralized way to the entire network. It can remain a super simple registry without any validation, blacklisting, stars, likes, up-votes, or anything else. Only what the person registers as claims for their content. Then those with README repos can create a simple follow.yaml file that contains every README repo they follow, organized according to their own tags, groups, and list names. With such a system in place a person need only identify someone and “follow” their README to get access to all the people they follow. Each participant ends up with their own personal database of resources they prioritize over everything else. And with the README.json file being consistent and on everything, adding something like rw followers cache (or whatever) to pull all of their README.json files into a local directory is more than doable. That would bring insanely powerful search capability onto the local device, all without ever having to touch the Internet unless a specific resource had not been cached.
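None of this is a published spec, so purely as an illustration (every field name here is my invention), a follow.yaml might look something like:

```yaml
# Hypothetical sketch of a follow.yaml -- fields are illustrative, not a spec.
follows:
  security:
    - name: jamie
      readme: https://example.com/jamie/README.json
  golang:
    - name: rwxrob
      readme: https://example.com/rwxrob/README.json
```

The top-level keys are whatever tags and group names the follower chooses, which keeps all curation on the follower’s side rather than in any central registry.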

I love this because it brings back one of the Essential Web things I had written off: user-controlled caching. My god, I’m excited! This will allow individual resource caching in a way that is locally searchable and can be easily rendered as text. It also makes it ridiculously easy to copy (for quoting) the text of another’s README. It makes it impossible to shut knowledge away behind firewalls and the like because sharing the information does not depend on the network at all.

Seriously, this project has such wide-reaching application and value that it causes me physical pain not being able to finish it fast enough (or convince others to help). I can either market and convince, or I can code. I don’t have time for both. If, as Nelson Mandela said, “education” (and thereby information) “is the best way to change the world,” then this project could be one of the best ways to save it since the Web. We don’t need a cryptocurrency to decentralize. We just need fresh approaches to what we already have and can build upon.

Google will never get behind this effort because it directly cuts them out of the fucking loop. Our global search engine becomes the people we trust and follow. Sure, Google can cache everyone’s content and get a real direct picture of them. But even that can be fixed. Since the addition of GPG to the whole architecture it’s just a matter of a configuration switch to fully encrypt one README or all of them such that only the people to whom an author gives a public key can even read it. Rather than using public keys for signing and author validation, they can be used as distributed passwords for those who subscribe to you. It’s up to you to decide if you trust someone enough to give them a semi-public key and allow them to watch. For Google to defeat that and wiretap everyone they’d have to keep track of all the public keys and could be directly called out for obtaining them illegally.

I wonder how many others have experienced the frustration of seeing clearly what is possible and how to get there and not having the time and resources to do it immediately. I wonder if Babbage felt that way. No wonder he was so cranky. The dude had to revolutionize the tooling industry just to get tools to work on his difference engine to begin with, and sadly, of course, he never finished either of his engines.

I suppose this desperation is one reason I blog so much. The act of blogging is trivial at 108 wpm. But it is the dumping of ideas I’m terrified of losing, or having the world lose that motivates me. Sure some things are trivial and forgettable, but if even one of these big ideas is realized in any form I don’t care how it happens. Maybe someone will get the idea and do it faster than me. I’d be the first to join their project if they did.

Tuesday, December 17, 2019, 3:05:50PM

Feeling rather torn about the whole RSS feed decision. I was really impressed by just about everything on Jamie Scaife’s site, which I found from his write-up on PureDarwin (but omg, his Raspi cluster!). I noticed he has RSS prominent on it and also the note at the bottom:

My website does not serve any intrusive adverts, tracking cookies or other internet annoyances. It’s also 100% JavaScript free.

I seriously loved that and became an instant fan of everything he does.

I got to thinking about the need (if any) for JavaScript on my site. I concluded I am going to keep it, but never depend on it, much like the way I view video or audio content, or even images.

As for RSS, this got me thinking that perhaps the reason he still uses this very dead technology is that it does not require any JavaScript at all. I was considering using JavaScript PWA push notifications (and likely will still add them), but the RSS XML file requires none. His feed is very lightweight and only includes links. That got me thinking that I could create one that includes each individual blog post as well as any new full articles, and eventually videos and podcasts that are not even posted locally.

The real trick with an RSS feed is keeping the thing trimmed down and current. In fact, I could have RSS feeds for each month of the blog in addition to a main feed that only contains the last 30 or so posts plus anything significant beyond that.
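For reference, a trimmed, links-only feed really is just a small RSS 2.0 XML file, something like this (URLs and values illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Blog December 2019</title>
    <link>https://example.com/blog</link>
    <description>Latest posts, links only</description>
    <item>
      <title>PEG Makefile Suffixes</title>
      <link>https://example.com/blog#peg-makefile-suffixes</link>
      <pubDate>Tue, 31 Dec 2019 12:24:37 GMT</pubDate>
    </item>
  </channel>
</rss>
```

No JavaScript anywhere, which is exactly the appeal.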

This all got me thinking about Google dropping RSS. Yes, it was fading in popularity versus Twitter and such, but Google has a vested interest in RSS not working, since fewer people would need to search to find content.

So README WorldPress will have the following notification options:

I’ll need to use a plugin architecture of some kind so that notification plugins can be easily added, but there will always be just a single executable. It is already hassle enough to have Pandoc installed.

Tuesday, December 17, 2019, 2:54:09PM

I got another random anonymous comment on the Wikis page:

… You’re welcome! Other than that definitional issue, your website is great, very helpful!

They were extremely helpful in pointing out a needed clarification of the term wiki. The encouragement prompted me to take a break from coding the time query formatting package in Go to flesh out that page and add another for Head First, which has been on my mind a lot lately while evaluating O’Reilly books.

It has to be a force thing: where evil increases, “light rises to meet it.” Big thank you to that anonymous helper! 🙂

Tuesday, December 17, 2019, 10:51:28AM

Ok, I’ve added a blurb about Muffet, a great link integrity checker.

Tuesday, December 17, 2019, 10:04:43AM

Got this glorious anonymous comment on my mostly empty JAMstack page:

OMG please take this broken ass site off the web. You just wasted so much of my time making it look like you give a shit.

Pfffhahahaha, my first reaction is to laugh my ass off at the conclusion that I don’t “give a shit.” If that asshole has even a slight clue about the amount of personal time and money I have invested into suffering through helping other complete strangers learn and avoid really stupid shit that will destroy their careers, lives, and the Web along with it, well, of course they would just ignore it.

The good news is that people are really visiting. I paid the extra $10/month to get my non-privacy-violating stats from Netlify (which is really worth it), showing I have 480 unique visitors per day.

Still, I think that person’s reaction was triggered by opening a page with just a link to the main site. That sort of thing is discouraged on Wikipedia and Stack Exchange, and people are just lazy enough to slam those who simply refer to other authoritative resources.

Plus they have no idea what is coming: that every node provides input into outlines and maps that, when brought together, compose guidebooks and such. I get the sense that they would have a hard time wrapping their small mind around the idea, content to complain and troll and project their own shitty world views and behavior.

I will say this. Most people who visit this site don’t have the patience for a dynamic site under construction. They are under the impression that just because something looks complete, it is. If I had a dollar for every time I read an article that claimed to be complete and realized there wasn’t even an iota of substantial research behind it — even though it had all its paragraphs and punctuation right — I’d be rich.

So I removed the index until I can get it to update properly. And although it makes me sick to my stomach that I have to do it, I won’t publish anything without at least a minimal summary. I will still get people who don’t appreciate terse and colloquial definitions preferring a full, impossible to read, and often dead wrong post on Wikipedia or Stack Exchange.

And as always, all of this filters out the idiots and trolls from the quality people I’ve always preferred to work with. I’m so encouraged to receive so many amazing comments and words of encouragement from those I actually care about. The rest can go fuck themselves. I will always listen to suggestions and criticism — even from trolls with the blackest of hearts — but I never have to accept them.

Sunday, December 15, 2019, 10:27:18PM

Watched Jason Haddix from Bugcrowd give one of the first bug-bounty-related presentations. I’m sure this is very outdated at this point (2015) but I’m going to take some notes on it anyway:

One temptation that is going to be really hard for me is not redoing a lot of these tools in Golang with full concurrency.

I cannot fucking believe some of the bounties that he shared people have won. One guy found a completely unauthenticated Jenkins port on Facebook and made $8000. I mean, this is fucking embarrassing for humanity and frankly I can’t wait to cash in on it. I have no shame. This is ridiculous!

Laughed my ass off when he said this about WordPress:

If you happen to come across a CMS, which is the pentester’s dream because those things suck, and the plugins suck, you wanna use …

I cannot overstate how horrible WordPress is for the Internet. Everyone should be doing JAMstack instead.

You can pull it out and use it to your advantage. I have found bugs that were on here but not disclosed to the customer who had a bug bounty so that was like a super easy win.

OMG! Seriously! I hope to god this stuff doesn’t stay this easy, but strangely I have a feeling it will get worse. They cannot hire people fast enough to make all the software that needs to be made — most of which is accessible over the web — and the number of people looking for bug bounties cannot possibly be rising faster than the rate at which people are putting out untested shit, meaning that the sky is the limit for those who want to put in (what other people would call boring) time.

Then the “intern” talked about his Ruby and grep script (pfffhahaha) to crawl stuff. This guy is presenting at DefCon for God’s sake. I’m sure he’s a wonderful person, but seriously, Go is going to destroy everything else for this kind of stuff. It was born for web crawling and scraping.

I’m really realizing pentesting, SRE, and offensive security are more my core calling and always have been, ever since I caught that hacker in 2004 on the Linux home server I hadn’t hardened yet. It is one of the few occupations where you get elated from other people’s idiocy, laziness, and greed causing them to push out stuff that isn’t tested.

It has also completely solidified my commitment to making full-stack web development core to everything I encourage people to learn. All of this stuff requires deep web knowledge, which is a natural growth from web development.

At the end of his presentation he said he has all his notes on GitBook and I couldn’t help but think how much better all of that would be in a README WorldPress repo.

Sunday, December 15, 2019, 10:00:34PM

Moving more into the bug bounty area and discovering some interesting things.

Here are the organizations I’ve checked out so far:

I did have to chuckle a bit that I had to fill out a Google form to get into one of the pentesting Slack rooms.

I have a lot of security auditing and general system administration and network engineering experience so this is just a matter of updating these specific skills.

Sunday, December 15, 2019, 2:51:29PM

While reading a review of cybersecurity over the last decade, I found this article from the MIT Technology Review, which is perhaps the most prescient I’ve ever read. It covers all the reasons China was not only not fazed by shutting down but how China is raping the Western tech world and thriving in the process. This is unfortunately the look of things to come for the foreseeable future. Things are already very interesting, but they are going to get crazy interesting over the next five to ten years.

Sunday, December 15, 2019, 11:20:44AM

Recently I read an article discussing the legitimate use of social “pentesting” on organizations to identify people who would actually be susceptible to things like the “Nigerian Prince” phishing scam. It described the importance of people learning that over-sharing on social media can be used against them as attackers pose as individuals from the target’s social circle. That got me to thinking.

Is a blog like this a risk for social engineering attacks?

Is the entire README World Knowledge Exchange a potential contributor to more social engineering attacks?

The answer is no for pretty simple reasons:

  1. No one says you have to use your real name to share even intimate knowledge online.
  2. People have been publishing personal details in novels and non-fiction forever.
  3. It’s really about just being aware (and not stupid).

Anyone can know pretty much everything about another person these days just through public records and that will only get easier. Defense against social engineering attacks comes, quite simply, from having a clue and being just paranoid enough not to trust anything anyone ever says. Having been married to a New Yorker for a while now — who is exceptionally friendly and trusting once she gets to know you — I’ve learned a lot about this. It is the same reason you always lock your car doors.

Truth is, if you seriously believe a Nigerian Prince has money for you, well, as Gilfoyle would say, it’s more about “natural selection” than anything else. Humans like this have been phished in broad daylight by televangelists, send-me-a-dollar campaigns, and pretend conservative newspapers run by flaming liberals just trolling the money out of stupid people who want to read all the reasons they are right.

Let’s call social engineering what it usually is: fraud. And it is nothing new. The defenses are the same.

Hiding from the world and being scared of social media is not the answer. Being careful about what you reveal and pretty much anyone who might claim to understand or know you is.

In fact, I would make the case that the anonymity of the Internet is the problem in the first place. Trolling only works when the account triggering everyone else is believed to actually be real, when the words are not seen as meant to trigger flame wars and over-reactions. That is why trolling works. Trump got elected from it in 2016. It is very well documented, perhaps best in the Cyberwar documentary from Vice. Less anonymity, less blind trust, and more attribution would cure the social engineering problem.

The simple truth is if your organization contains people who have to be trained to possess these abilities and behaviors you have far bigger problems. Is it okay to phish them to see if they exist so you can train or fire them? Hell yeah.

Saturday, December 14, 2019, 4:49:28PM

The README World Knowledge Exchange and README WorldPress Personal Publishing Platform are the culmination of thoughts, ideas, and hard work from community members for over six years. Stuff is finally taking form where we can polish it and start releasing and promoting it. I’m so very excited. I do hope other members can accompany me when presenting about it over the coming year. Still a lot of work left, but it feels like we are so close.

Today we just hashed out the GPG signing option and we’ll be working on it this week. I need to rebase the project to get all of the small-scale commits cleaned up first.

Saturday, December 14, 2019, 11:27:11AM

Analogy of The Spotter

I love the analogy of the “spotter”. I really need to capture it in an article at some point and share it. Basically it goes like this:

“When you are lifting weights to get strong, say bench press for your chest, often you have a spotter to help you out. The spotter’s job is to help you just enough to keep progressing and make sure you don’t drop the weight on yourself. Often they will just use a finger or two to get you just past the hard part while you lift the weight. They don’t just point and laugh (like that Simpsons character) and they don’t just lift it for you. But they can’t lift it at all without your pushing. So do some lifting and I’m here to spot you when you get stuck. If I tell you exactly what to do that’s like I’m just lifting it for you and you won’t get stronger, but I’m here when you need me, spotting you.”

Unfortunately, this is one of the least understood concepts in mainstream education. I never see it. Usually students are left to fend for themselves, or are given the answers too easily. To be a good spotter you have to exhibit traits that a lot of public school educators do not. The following are required:

Friday, December 13, 2019, 8:29:18PM

So the Press names have me thinking, humm:

That last one is rather sinister since it would pull up in searches for WordPress as well. The full name could be README World Press and the short versions could be READMEPress and WorldPress. I wonder if README WorldPress would work; it’s different enough from WordPress while still invoking (and borrowing from) the meaning of WordPress. People will get what it is, at least the main gist of it. Yeah, I like that one. Then if someone says “WorldPress” and someone else says “WordPress?” the response can be, “No, README WorldPress” to clarify. I also love that it invokes the “Hello World” idea as well.

This is entirely a labor of love, but the possibilities are huge. There is simply nothing even remotely close to this tool out there. Not only does it allow the organization of one’s knowledge source and content and the generation of progressive web apps from it, it also allows a person to locally search their personal README repo knowledge base using simple and complex search syntax and display the output either as a pop-up local web page or as color terminal text: your own personal man page system.

But the part that is going to blow people’s minds will be the ability to subscribe and sync to others README repos and locally query them as well from the terminal or the web. There simply is nothing like this on Earth at the moment.

I have to keep telling myself “this is just for me” and not get disappointed if the world doesn’t adopt it. But my god is it amazing even if I do say so myself. Hell, people might just use it for the ability to write in Pandoc Markdown and nothing more. The table syntax alone is worth it.

Friday, December 13, 2019, 1:56:22PM

I swear to god my best ideas come in the shower. Once again it will seem like a small thing to many, but it’s huge to me.

README World Press, Personal Content Management System and Static Site Generator. Share what you know.

That’s the new title and tag line for the project formerly known as “README Repos”, a part of the S²OIL initiative.

Friday, December 13, 2019, 12:20:10PM

So glad one of my older community members reminded me that the XPS has an eight-core CPU (not four). I don’t know why I forgot that. It makes so much sense. Having recently reviewed all the best-laptop-for-hackers options, I was putting the Razer Blade with six cores ahead of the XPS. I think that was a mistake now, since the one limiting factor with VMs above all others is the number of cores. That makes the only downside of the XPS the crappy butterfly keyboard. The XPS can still use my old Mac Thunderbolt monitor. That makes the XPS still the overwhelming leader for pretty much anything professional.

Thursday, December 12, 2019, 6:04:45PM

Younger community members are really happy that Minecraft 1.15 is out with support in Spigot and FAWE. Seems like Microsoft must be working more closely with the Spigot team to get these releases out right around the same time as their official releases. There used to be a month of lag.

Thursday, December 12, 2019, 3:28:52PM

Rob Pike implemented map/filter/reduce in Go as an experiment and then told everyone they “shouldn’t use it” and to “just use ‘for’ loops” instead:

I wanted to see how hard it was to implement this sort of thing in Go, with as nice an API as I could manage. It wasn’t hard. Having written it a couple of years ago, I haven’t had occasion to use it once. Instead, I just use “for” loops. You shouldn’t use it either.

Thursday, December 12, 2019, 3:14:09PM

The wonderful case study from Jake Wilson should be mandatory reading for any Go developer. His memory issues with slices cast from underlying bytes that never lose their reference are so important for everyone to understand. This is where understanding C really helps with understanding Go and its garbage collection — especially when it doesn’t behave as you expect. Slices are just structs with pointers at the end of the day. Never forget that the thing they are pointing to might hang around even after your specific slice isn’t used anymore.

Wednesday, December 11, 2019, 9:49:10PM

It’s official. Some community members and I are now registered to start hacking legally on both HackerOne and BugCrowd.

Let’s be real.

Most of these “hacks” are just the result of watching a scanner and doing some minimal forensics. I used to write that stuff for IBM, so this ought to be fun. I lived for layer 7 protocols, wrote cross-platform, distributed Perl binary executables (yes, binaries) to securely coordinate audit compliance checks and patches on tens of thousands of *NIX servers, most of which were in the Fortune 500. I did that, I really did. I suppose it is good to look back at our accomplishments from time to time — especially around your birthday. Time for this Mr. Rob to remember that Mr. Rob. And no better day than today. Look out, all who are too lazy or stupid to secure your systems! We’re coming to cash in on you, legally. You can pay now to do it right, or you can pay way more when we find you. I love that all of this is so legit now. It’s about time.

The worst manager I ever had used to literally yell at me for “boiling the water,” for uncovering bugs that fucked with his deadlines. Hummm, I wonder if IBM bounties are up. It’s time to “boil some fucking water!”

Wednesday, December 11, 2019, 9:42:26AM

Rediscovering VSCode; it has improved again over the last six months. Before, it was bloated crap. The secret seems to be getting used to the bare-bones editor with very minimal extensions (if any at all) and instead learning to depend on the Bash command line for most things, such as Browsersync.

Tuesday, December 10, 2019, 11:23:19AM

Been thinking a lot about how much knowledge retention has in common with knowledge sharing and specifically the importance of note taking, good books, annotating them and how all of it is impacted by the digital transformation hitting education.

Motivated WGU Student Notebooks

There’s a lot of synchronicity happening in my work and research at the moment. I’ll try to summarize it.

I need to write some sort of guide on just the topic of note taking and annotations: why you shouldn’t mark up technical books (so you can return them when you finish them, before they get out of date), and how all of that solidifies learning through neurochemistry. It is a core component of the S²OIL initiative. I wonder what the name would be, humm:

All of this has prompted me to specifically add another standard README format specifically for book annotations.

Monday, December 9, 2019, 9:02:00PM

Ugh! Even though I will never use or recommend freeCodeCamp to anyone ever again after this mess, I am still helping those who have worked hard over the last few months to complete their first certificates. Tonight, however, someone finished one of the final projects and got a green pass, only to submit it to the system for validation and checking off and be told they had 80% left on it with no indication of what was still missing.

The lesson learned here is that none of the “free” tools out there are any good, not at all.

Instead, it is always better to get an even slightly outdated book with projects in it and do those projects correcting and learning as you go. That has been the best way to learn from the beginning and continues to be.

And — most of all — get as much technology out of the fucking way. Technology just fucks up the learning experience with frustrations because the tool doesn’t work, not the material. It is bad enough finding solid material and content; it is literally impossible to find up-to-date content combined in an “edtech” tool that isn’t a piece of shit.

I am so glad to have realized this. I almost put a lot of energy into skilbots, which would have been entirely based on the false assumption that challenge technology makes learning more fun and effective.

The absolute best way to engage and learn is to do something you care about, without artificial motivation.

Sunday, December 8, 2019, 3:17:47PM

Had someone write on my Medium blog that I definitely need to read YDKJS, and I just laughed. Two years ago I thought, “why not?” I even created a PWA out of his book since he makes it nearly impossible to consume his “free” version, hypocrite. I followed him for years; finally I had to block him and add him to my clueless group. I have nothing good to say about that dude other than he has a very nice conversational writing style. Here’s an excerpt where he justifies why you would still use IIFEs.

YDKJS Foolishness

Yeah, just no. That statement is factually wrong, and practically every best-practice book and organization now recommends against var while still teaching it only because it is so completely and totally rampant. The let keyword was added because of how badly var fucked up so much code. You know, for similar reasons that strncat() was added to replace strcat().

One-trick-pony programmers like this guy really get on my nerves because they are fucking dangerous — especially when they spew seriously dangerous ideas like this. I would fucking fire a programmer for excessive use of var when let would do. And if they didn’t understand why I would tell them to do the research.

I seriously wish this shit didn’t bother me so much. When challenged on it I spent a full hour reconfirming my research ’cuz that is the kind of obsessed person I am. I have no patience for others unwilling to do the same and write lame, anonymous comments with nothing to back it up.

My problem is that I care too much. Others would be, like, “huh” and look back at the code they are working on. Such obsession is definitely one of my biggest strengths and flaws.

Sunday, December 8, 2019, 5:34:14AM

Watch Out for Unicorn Shit!

Davy Jones

Why so grumpy this early Sunday morning?

Well, because after reading this summary of a “#1 best selling” book about how to transform your lame-ass business into a “Unicorn” I just can’t help it.

Unicorn Project Summarized

Gene: “Let’s see, what buzzwords can I put in the title of my shitty book that will get me the most random Kindle purchases that people will never read. I know, ‘unicorn’, ‘disruption’, ‘thriving’, and of course ‘data’!”

Random referral: “The Unicorn Project clarifies the what and why of digital transformation.”

TLDR: If your company sucks enough that a book like this would actually help you, if you need to actually read this incredibly obvious shit in order to know to do it, well, you deserve to fucking die as a company immediately and get out of the way so people and companies that matter can take your place.

I would bet Mr. Gene has never seen a Linux terminal in his life or wouldn’t know what to do with one, you know, like Terry Colby. But he sure knows how to use emojis on his iPhone and shitty Macbook Pro.

Terry Colby

I’m sure Terry Colby would be first in line to get his book, which, of course, he would buy and never actually read.

Exhibit A - Gene Doesn’t Know Basic Vim

My favorite part is his using IntelliJ (mostly for the world’s most used dead language) or, of course, VSCode, which all the kids are using now-a-days just ’cuz. Even though he admits to having problems doing basic replacements that vim does consistently better, without hunting for some fucking GUI widget, he continues to use Microsoft’s “cool” editor and put up with learning some shortcut that might change next week.

“Oooo, multiple cursors, 🙏❤️🎉🦄 That is so awesome!! I love it!!! So great!!!”

I mean, does this guy know how fucking stupid anyone looks who types that many exclamation points? Or even one exclamation point?

This emoji/bang freak is apparently an “author” with 42k Twitter followers and a book on the WSJ bestsellers list but doesn’t know about VIM relative line numbering or, worse, how to fucking Google his own answers. And why should he, with that many followers?

To everyone out there fighting imposter syndrome I will just say, many clueless people out there are successful who should have imposter syndrome but don’t.

I’ve never met Gene. He sounds like a fantastically friendly guy and given his over-use of emojis and poor use of punctuation I imagine his bubbly personality is much like what is on display in this thanks-for-the-money/smile-for-the-pretty-conference-birdy photo.


I confess, Gene probably doesn’t deserve to be the target of my angst and frustration, but what the hell. I’m a bad person.

This whole thing flooded me with memories of a phone interview I had with a pompous corporate poser who clearly delineated their company “mission” to me:

“We study the Unicorns carefully and center our research on their successes, then we work with our clients to recreate those successes for them. We follow the Unicorns and help others become Unicorns. We do have groups doing our own research, but mostly we just help others reproduce what has already been successful for the Unicorns.”

Those were his exact words. I shit you not.

I about laughed out loud on the phone. I was very polite, but the interview was fucking over! (#AppropriateBang) They massively failed.

I don’t want to chase Unicorns. I want to be the fucking Unicorn — and so should you.

I see courageous, truly brilliant CTOs and engineers leaving amazing companies to start their own companies they believe in (like Oxide and those behind R-Socket) and am reminded that is what people should shoot for, not following Unicorns so close up their asses they get kicked in the teeth when the Unicorn “pivots”, or step in the unethical rainbow shit they drop as if in some sick Silicon Valley fantasy parade.

Do I sound bitter and a little sour-grape-y?

Maybe it is because Gene’s perception-managing, carefully cultured persona sickens me. It’s not even about Gene. It’s about what he stands for. I know why he is doing it. Hell, I did it for years, which is why I hope for a better future, one where the massive, unchecked power of Silicon Valley gives way to the original brave, adventurous, brilliant spirit that started it all. I don’t hate Gene. I hate that he still has to exist.

Saturday, December 7, 2019, 11:16:04AM

Had one of my younger members work on CodeCombat a bit today while trying to find something wrong with his Macbook. When I went to the CC site I noticed three interesting things:

  1. It was crawling, so slow it could barely keep up with the animation.
  2. It does not use the GPU accelerated canvas element.
  3. It prompted to “Install App” (probably from some PWA-ness).

I wanted to record this immediately to remember later what not to do when making such a site. The use of HTML only is a disastrous design decision, almost as bad as their decision to do the entire codebase in CoffeeScript. I absolutely love the team behind CodeCombat and everything about their goals and motivation. But the decision to use the tech they chose was completely and totally stupid. I was just reminded of that.

Friday, December 6, 2019, 7:28:27PM

Just discovering R-Socket and I gotta say it looks really solid so far. Everyone can kind of sense the failure of microservices, which are naturally converging and hitting latency issues related to HTTP request/response. The issue is that people aren’t using microservices as they were originally envisioned, where one group would maintain its own microservice for its department and plug into the rest of the enterprise using that interface. Makes me wonder if Bezos just codified his model — from Conway’s Law — into something the whole world is now stuck with. Bezos probably didn’t originally see the collision of microservices with containers. Once containers came on the scene, that original, independent, imaginary department maintaining its own microservice stopped being a thing. Suddenly all these independent things were living essentially on the same Kubernetes cluster. Then you see jokes like these (which are hilarious):

Dev Oops

Imma say it. I felt this coming. I even blogged about how DevOps was really overrated.

It’s just the massive centralized <-> decentralized pendulum swinging with its eternal momentum. Except this time the momentum seems really out of sync. You have companies like Oxide from Bryan Cantrill moving in the opposite direction, creating infrastructure options for those who want nothing to do with the cloud. Then you have initiatives like R-Socket pushing to speed up the communication between containers in a centralized DevOps cluster. One thing is for sure: DevOps will never be the same after this R-Socket stuff.

Oh my God! I just realized something else. This will make languages that have the strongest TCP/IP stack implementations and the easiest, most efficient concurrency dominate even further. I wouldn’t be surprised if the core R-Socket implementation ends up written in (you guessed it) Go. Seriously, this is all just confirming that the best technologies really do win in the end these days. The old days of accidental Betamax losses to the VHSes of the world are dead. The world is far too hyper-connected for such crap to go unnoticed for how bad it is. Go is going to continue to dominate.

Friday, December 6, 2019, 3:53:05PM

Recently had an interesting thread on Twitter about GitLab’s policy changes to make it clearer about how it intends to deal with issues like the one GitHub is facing with ICE.

This has me seriously contemplating the issues related to technology services and, frankly, businesses in general. Bakeries can refuse to make cakes for gay couples. People can complain that GitHub should banish any code that is (as someone determines) related to ICE in any way. Google and Facebook regularly hand over dissidents to the Chinese government, so much so that Amnesty International needs to make it a big deal — and it is. Would Google have revealed Anne Frank?

My biggest gripe is that people tend to oversimplify the issue and come down hard on one side or the other. This is not an issue with clear cut good and evil.

Friday, December 6, 2019, 3:46:09PM

I love Netlify’s analytics. It’s just enough and doesn’t invade anyone’s privacy. Plus it is a way to give back, supporting a service that is absolutely one of the top on so many levels: performance, addressing real needs, ease of use, pricing, and customer service. I am such a fucking Netlify fanboy at this point. I chuckle a little at my 2014 self for preferring Surge because it had a better logo and command line interface. Netlify has really established itself as the dominant player in the most important Web category going forward. Hell, their team even coined the #JAMstack term. Every single book on web development definitely needs to start listing Netlify prominently in some chapter about hosting (along with GitLab).

Friday, December 6, 2019, 12:17:47PM

While reading and annotating Head First Go, A Brain-Friendly Guide I am noticing how uncanny the similarities are to how I first taught Python all those years ago to kids as young as eight. I added memes and kept the projects small: Nyan Cat for loops, Badgers, the Bridge-Keeper, all stuff that one student later told me made it incredibly easy to call up later. He specifically mentioned that keeping the projects small meant knowing which one was about what, so he didn’t have to sift through a bunch of index content to look for it. This one-memorable-meme-per-concept approach was wildly effective. Turns out it is based on the same neurology findings discussed informally in the Head First series of books.

Too bad the Head First book on JavaScript is so ridiculously out of date and just plain bad. The biggest flaw isn’t that it doesn’t cover ES6; it is the entire approach emphasizing imperative programming in a language that is fundamentally event-driven. The battleship project is just downright misguided specifically for JavaScript. It is so important to teach when-then thinking from the very beginning. I mean, it even encourages while loops, which are strongly discouraged in modern JavaScript.

This means I will have to add the Head First mnemonics to Learning JavaScript, which is a rather dry book but very well done. It is short on examples and projects but covers the important material very well.

Thursday, December 5, 2019, 7:55:19PM

Reading back through my GitPod rant and I think I’m being forced to see a reality that is as dark as Elliot’s dreams. People want everything on the Internet. The fact that more than half the Web is still on WordPress just confirms this. Why? Because people just want convenience. They want to fill in a blank here, change a color there, and get their website up. Hell, that is exactly what Doris (my wife) does.

Why, then, should GitPod and in-browser code editors exist?

For the WordPress people. Yes. I found a use case. Those people who have a hard enough time giving up their GUI Wix or WordPress interface, but might possibly try a pretty editor if it is also in the browser and doesn’t trouble such users with installing anything.

Okay. Point taken.

I still think developers who depend on such silliness will always be relegated to wearing helmets and having corks on their forks. BUT for the potential new developer who just doesn’t want to fuck around with setting Git up on their laptop or desktop (let’s be real, that is still way harder than it should be), for these users, yes, this has potential. I confess I might even introduce it to some as a backup that is better than using the in-browser editor that comes with either GitHub or GitLab. And God knows it is better than those because it is closer to the actual graphical editor that would be used. This is also the reason that the VSCode cloud editor from Microsoft makes the most sense.

Thursday, December 5, 2019, 7:31:39PM

Something rather obvious occurred to me just browsing through the list of top hackerone bug bounty winners. Getting big money from collecting bug bounties doesn’t mean shit about your skills. Seriously, you don’t see Santiago (#1) writing his own complicated zero-day exploits or making discoveries like Charlie Miller. Then again, Charlie is often pictured with a Mac (like Santiago) even though he says he “learned hacking on Solaris” (which is incidentally where I got my start as well, not counting all the stuff I did on that Atari 800).

The take-away is rather simple. You can do pentesting on just about anything, but the more cores and RAM the better. Four cores is really the minimum requirement. Advanced pentesting requires broken systems or broken applications to exploit. The easiest (and most efficient) way to analyse these is with virtual machines running the operating system they require. That means VMware Workstation (definitely worth the cost over VirtualBox) running on a system that handles them well. VMs also allow simulating different servers for cross-site scripting attacks from the same computer.

By the way, I was watching the Devoss video again and saw his script. I couldn’t help but think how much better such a tool would be in Go. I can’t wait to port many of these tools to take better advantage of the 16-core concurrency of modern desktop and laptop systems.

Thursday, December 5, 2019, 6:50:22PM

GitPod announced support for GitLab now.

VSCode has done the same with VSCode “cloud” now.

These fucking brain-dead companies can’t even see what is happening all around them. Brilliant people and companies are moving away from the cloud and centralization for well-researched, objective reasons. #serverless is not #computerless.

When I ask these fucking idiots how they plan on doing development while on public wifi or even on an airplane, they don’t have an answer. When I respectfully ask them, “What is the use case for your product?” they’ve got nothing. How the fuck these people get the millions of venture capital that they do is absolutely beyond me. I imagine it is because they put on a good show and most venture capital people are dumber than a lobotomized tuna fish.

The only remote use case for these “products” is a world where everyone has Chromebooks and great Internet access. This is the same stupid architecture so many public school districts have bought into. It is also the only way to do anything with an iPad or other tablet device.

All of it is significantly insecure and just plain stupid.

They all claim it is far more secure than an actual computer with responsibly installed apps on it. But no one cares. Apple continues to make macOS more like iOS. The entire movement in these large corporations is toward more cloud, more loss of privacy, more dependency on extremely fast Internet.

Meanwhile more than half the fucking world doesn’t even have Internet access at all.

Mark my words, this house of fucking cloud cards will come crashing down — harder than the WeWork collapse (which I totally fucking called!).

You will see.

It’s not just me saying that.

It’s not FUD. It’s fact.

Those smart enough to connect the dots and motivated enough to do the research see it as plain as day.

Thursday, December 5, 2019, 5:33:56PM

Sampled the Razer Blade 15.6" and I have to say that there are definitely pros and cons to either the XPS or the Blade. The Blade might come with way more connectors and a 6-core processor but costs about $700 more than the XPS. After all, it’s a gaming system first and foremost. I could give a shit about gaming on it. I want it for the 6-core processing power and VM potential. It makes sense that it would then come with a lot of stuff that is simply not needed.

I also had a horrible experience just getting answers to simple questions like, “Will the Blade connect to an old Mac Thunderbolt monitor?” (The answer is no, by the way.)

I feel like the keys on the Blade are much sturdier (again, it being a gaming laptop), but the key that broke on my XPS broke after a solid year of me typing directly on the laptop keyboard. I fucking hate the XPS butterfly keyboard that copied the Macbook Pro it is aiming to replace, but after putting the chicklet keyboard from my Mac onto the XPS (so I can survive without a k key) I have to say there is simply nothing better. These keyboards are objectively the best keyboards I have ever used at every level. My output has already doubled and — more importantly — typing on this Mac keyboard is such bliss it actually makes me want to produce more and stay in front of the computer.

Also, with the external keyboard my fingertips aren’t baking from all the heat rising up from the laptop keyboard.

Honestly, I’m wondering what the fuck I was thinking for the last year. I should have been using this keyboard the whole time. It’s not black. So what. sigh

But the absolute killer of the Blade for me was that even though it has both a mini display port and thunderbolt 3 they weren’t smart enough to ensure it supported the many old, wonderful Mac Thunderbolt monitors. The XPS looks beautiful. I literally forget that I’m not on a Mac sometimes given how much I use the terminal for everything.

Plus I am reading a lot of shit about Razer as a company. They are everything you would expect from a California company: gobs of wasted money on packaging and marketing, usually shitty products. Razer keyboards and mice are regularly laughed at by pro gamers for being as shitty as they are. The laptop was very solid, I will say that. You can tell it is their flagship. But for the price they are asking, it is no wonder that Mark Litchfield uses a Dell XPS for his #1 bug bounty hacking versus the many others who use Razer Blades.

The true test will be when I try to run three VMware Workstation sessions on this XPS running Windows 10. I haven’t put it through that test yet. I’ll report when I do. My feeling is that, other than the occasional conference or off-site training (like OSEE), most exploit writers will have a pretty substantial desktop system from which they do most of their exploit work. That means the laptop need only be powerful enough to complete the training or demo. I cannot imagine a need for that much remote power (at the expense of battery life) for any hacker working remotely. On the contrary, having a system that can remote into that main desktop system is a more plausible scenario for what is needed to get the job done.

Thursday, December 5, 2019, 1:51:48PM

Switching to a plain mono-spaced font with no background color, consistent with conventions followed in pretty much every black-and-white O’Reilly book ever published. I noticed that some were trying to click on such text, thinking the background color indicated something to be clicked.

I continue to be blown away by all the shitty design decisions accepted as mainstream now. The entire Patagonia printed catalog was in 10-point Helvetica. This isn’t a generational thing; this is designers not giving a shit about the science behind how humans read and process words. So I don’t give a flying fuck what people think of my plain design selection. My priority is on knowledge transfer anywhere in the world, and all your shitty syntax color highlighting doesn’t print and makes your content ridiculously hard to read by regular people — not to mention those challenged with color blindness.

Once again, people following the “what can I get away with” mantra instead of “what if everyone did it.”

Thursday, December 5, 2019, 1:10:11PM

I really like that every HTML rendering of any README Repo gets the following by default:

Local URL     Description
dex           Listed by title with subtitle, summary, icon, and meta data.
categories    Grouped by category.
authors       Grouped by author.
contributors  Grouped by contributor.
tags          Grouped by tag.
formats       Grouped by format.
published     Listed in reverse chronological order when published.
revised       Listed in reverse chronological order when revised.

Expired content is never included because the rr tool will fail to build anything while any expired content is still present in the overall README Repo.

Currently no other tool that I know of does this. Now if I could just finish the damn thing. It’s hard to stay patient with myself, but taking the time to do it right is going to pay off in the end, even if it ends up as nothing more than a system I can use that keeps up with my pace and method of content production. If it helps others, all the better.

Ya know what this really is? It’s a git-driven, JAMstack content management system. A very solid alternative to both WordPress and any wikis out there. There are certainly other tools emerging in this space, which is exciting, but there is simply nothing of the simplicity and scale of this README Repos project. No one has even conceived of being able to create aggregates of other content out there. Everything is isolated, self-contained, and usually database dependent. This entire fucking thing is driven by the ability to cleanly aggregate content locally and externally.

Thursday, December 5, 2019, 12:29:54PM

Getting sucked down the format and categorization rabbit hole. This is where my pseudo-autistic nature kicks into high gear, but it makes me crazy not having it organized in advance. I’ve talked about categories of content and formats of content before. First, here’s the latest attempt at capturing the standard README formats:

README Formats
Format    Description
article   Standard article with title, subtitle, and sections, much like anything posted on Medium today.
numbered  Numbered sections as with a specification. While this automatically numbers everything in the output, the aggregate format might be better when combining content from several README nodes.
spoilers  Every section but the first is hidden in “spoiler” fashion where it must be clicked to be revealed. Otherwise identical to article, with {.spoiler} added to every section heading but the first.
sheet     YAML-heavy data for cheat sheets as would be made for vim. Triggers creation of a sheet JSON file and a term ANSI terminal file as well as the index.html file.
log       Chronological sections only, either forward or in reverse, all with a consistent date and time format in every section heading but the first (title).
video     A single linked or local video resource.
audio     A single linked or local audio resource.

Each format has a standard integer associated with it universally, allowing language-agnostic naming in the future.

I also realized there are other aggregation formats, those that pull together individual READMEs both locally and from external sources:

Format      Description
aggregate   YAML-heavy collection of content from multiple other README sources, both local and external. [This is the wiki killer and primary motivation for this project.]
flashcards  Output rendered as a set of randomized flashcards with the prompt defaulting to the title of the individual README node. YAML-heavy.
slides      Output rendered as slides in any number of slide presentation formats supported by Pandoc. YAML-heavy.

The aggregate format is the original impetus for creating this project. It allows bringing together content that is managed much like files or modules or libraries of software into a single consumable product. This problem has plagued content creators for as long as content has existed. Earlier it was addressed on paper with references and “see also” notes, but providing an aggregate allows content creators and consumers to create their own aggregations, to mix in what they want and need rather than taking it all.

For example, many of the books I recently purchased and reviewed have entire sections that I just want to leave out (or rip out). Had these books been published as README Repos I could use the content to create an aggregate of only the parts that I want. I could even print only my aggregate with full attribution to the original content creators. The selection of such content into my aggregate is my decision, so I, as the overall aggregate author, can ensure that the voice, vocabulary, and quality of the aggregated content is consistent with the rest of the content in my aggregate. This is the power a modular, software-like approach to content creation provides and is the antithesis of the shitty, failed wiki approach to the same problem.
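To make the idea concrete, here is a hypothetical sketch of what an aggregate node’s YAML front matter might look like. Every key, path, and URL here is invented for illustration; the real rr schema may differ entirely.

```yaml
# Hypothetical aggregate README front matter (illustrative only).
format: aggregate
title: My Go Study Guide
sources:
  - local: ../dtime/README.md
  - local: ../tinout/README.md
  - external: https://example.com/some/readme-repo
    sections: [Installation, Usage]   # pull in only the parts I want
attribution: full   # keep original author credit on every section
```

The point is the module-like composition: local and external nodes mix into one product, with the aggregate author curating what comes in.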

Thursday, December 5, 2019, 12:18:52PM

While going through the list of formats and categories for different files I realized that the FMT_SHEET, which is mostly written in the YAML front-matter, could be easily output to a sheet JSON file within the directory along with the index.html rendering. This would result in URLs like that would fetch the JSON version while https://skilstak/vim would get the HTML. Then I realized that for curl users I could even render a term version containing standard ANSI escape sequences for color and such, allowing a curl-specific URL.

Then I’m like, “What the fuck?” Why not allow every single README to potentially activate a term output? That would mean each README node could support readers of three different output formats:

  1. Pandoc Markdown
  2. HTML
  3. Terminals

This is one of those times where following a solid design and architecture just because it feels like the right thing turns out to have unanticipated benefits later.
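As a rough sketch of how the term rendering might work (the function and field names here are mine, not the actual rr code), the standard ANSI escape sequences are all you need to make `curl`-ed output readable in color:

```go
// Render a README node's title and sections with ANSI escapes so a
// plain `curl` of the term URL prints bold and colored in a terminal.
package main

import "fmt"

const (
	ansiBold  = "\033[1m"
	ansiCyan  = "\033[36m"
	ansiReset = "\033[0m"
)

// termRender produces the terminal version of a node: bold title,
// cyan section headings, indented bodies.
func termRender(title string, sections map[string]string, order []string) string {
	out := ansiBold + title + ansiReset + "\n\n"
	for _, name := range order {
		out += ansiCyan + name + ansiReset + "\n  " + sections[name] + "\n"
	}
	return out
}

func main() {
	fmt.Print(termRender("Vim Cheat Sheet",
		map[string]string{"Substitute": ":%s/old/new/g"},
		[]string{"Substitute"}))
}
```

Since the escapes are plain bytes in the response body, the same static file serves every terminal; no server-side logic is needed beyond building it once at render time.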

Thursday, December 5, 2019, 11:30:30AM

The thing driving me beyond anything else to complete the README Repos utility is so that more people with knowledge can share it easily to combat the overflowing junk heap of shitty information out there. Here’s an objective study of just such failure. My tool and content might not be the most popular in the world, but very little content is better researched. The rr utility I’m creating will provide the fastest means possible for me and anyone to share their objective facts and research about their conclusions and opinions rather than spewing them in 140 characters on Twitter or behind a paywall like Medium has put up.

Tuesday, December 3, 2019, 6:31:33PM

Officially added the Senior Engineer diagram. Everything feels so solid lately. It’s certainly a process, but I’ve never felt more sure of everyone’s specific direction.

Tuesday, December 3, 2019, 6:27:16PM

OMG! I love this white, chicklet Mac keyboard. It is insane how much faster and more accurate my typing is. Brings back memories from when I used a Mac instead. I’m never going back and cancelled the other keyboard order. Nothing can top this. Mint has a nice setting for swapping the Alt and Super keys that covered me. I don’t even have to look and I have the exact same muscle memory. The only downside is that I have a white keyboard with a white cable on my all black table, but meh. Does that make me grey? Grey is good.

Tuesday, December 3, 2019, 2:47:02PM

Further research reveals that successful pentesters, those making the big bucks with bug bounty programs, are still using the Dell XPS (and other computers) that have fewer cores and less RAM than the Razer Blade. But these folks aren’t writing zero-day exploits that take advantage of operating systems as much. Mostly they seem to be owning the web servers, apps, and databases they are connected to — particularly WordPress, which reminds me: PHP is mandatory learning for anyone getting serious about bug bounty money.

Tuesday, December 3, 2019, 10:33:30AM

I’m very sorry to have to write that my Dell XPS 9575 has had one key break, and the others look like they are ready to break as well. It is still under warranty, so I am returning it and getting a new one. I will be researching the Razer Blade 15.6" since it is apparently the preferred laptop for cybersecurity professionals given the 6 cores it has, providing far more virtual machine potential. VMs are essential for analysis and for running Windows for exploit development. I actually read about these baselines in the OSEE requirements. The XPS is the right pick for web development, for sure, but lacks the horsepower of a hacker’s primary laptop.

Also, in other news, rediscovering the Mac chicklet keyboard is amazing. My typing speed is so much better. I don’t think I realized how much I loathed the keyboard on the XPS until after using the Mac keyboard for a few minutes. Seriously, the XPS tried to be a little bit too much like the Macbook Pro, including the shitty keyboard that the new Macbook Pros don’t even have anymore.

I really need to update the standard system recommendations on the site. Ugh. I need to update everything on the site. Having this laptop stop working was a major inconvenience.

Monday, December 2, 2019, 2:58:26PM

Everyone needs to endorse the Contract for the Web. Ironically, ever since the Essential Web initiative I started with my Pros some five years ago, I have been obsessed with little else. The Web is being destroyed by small-minded JS-JS-JS advocates who would have no one on Earth producing any content for the Web that they didn’t create an app for first. God it infuriates me.

Monday, December 2, 2019, 1:43:06PM

After a lot of review of the OSEE laptop requirements I’m changing my official pro developer and cybersecurity engineer recommendation to the Razer Blade 15" with the 6-core i7. It’s no surprise that it is also the number one gaming laptop in its class. At $2200 it certainly isn’t cheap, but it is such a better buy than any of the $3000 MacBook Pros.

Honestly it is the little things:

Sunday, December 1, 2019, 5:30:06PM

Feeling really good about the four best books everyone needs. It has taken a lot to read them enough to make the assessment but I’ve never been more confident in these materials.

Now comes the hard part: writing the annotations for each. First I have to complete the rr tool I’ve been using to build out this site and blog, then I will be reading and annotating for a month or so.

Sunday, December 1, 2019, 12:53:23PM

Categorize this first December post under ranting. I’ve never been more sure of my direction and looking back I now realize how fucking solid it is:

This year has been flying by. Went through some of my old YouTube videos and had to really laugh at myself. I had to cringe at the bad memory of completing an entire web site only to have to trash the whole thing, all the code I used to generate it, and replace it with the one I have now. But I’m so glad I did.

Back then I was still catering to those younger and more casual potential coders and hackers. We had the van for trips and I was still teaching several people at a time. Just before the move I posted a video reducing “class” sizes to three, like before when I started.

Then we got the wonderful news from our shit-head landlord that he wanted the place for himself, despite several verbal promises that we could stay, and despite our never missing a single rent payment over six years. We even had to personally deposit cash into a specific Wells Fargo account under a specific account number, a practice Wells Fargo has since banned for how fucking shady it is. Based on an in-person reconfirmation that we could stay, I chose to spend the $10k I had saved for potential moving expenses or a down payment on new computers and desks instead, to keep up with the anticipated demand for game development, something I wish I had never entertained now. The guy was the epitome of the conservative, finance-bro, football-moron with ill-gotten money. I am so tired of running into these types of Americans who live by the what-can-I-get-away-with ethic. At one point he actually uttered the words, “I don’t do empathy.” This was after I sent him pictures of razor blades and roofing nails pointing straight up surrounding both of our entrances during a Minecraft camp scheduled far in advance. That is not an exaggeration. Seriously, if there is a hell, he will have a spot there for sure.

Honestly, I fucking hate game development. Sure making a small game is fun, and it is certainly fun to watch young people learn to code with it, but there are so many more interesting things to do with technology, so many more important things.

In fact, when a person tells me all they want to do is learn to make games I find myself much less interested in sharing any of my very limited time with them. For many years I have been fighting that internal conflict. Usually those learning to code web sites and simple games move on to other technical things, but honestly, you know someone who is serious from the very beginning. These days the only people left (and accepted) are those who are serious from the beginning. Ideally, I get people who are only slightly interested in even playing games, like Nir Gaist, who was never into games and became one of the greatest white-hat security professionals the world has ever known.

Besides, legitimate hacking is way more entertaining than any game out there at all. It takes actual skills that go far beyond having good spatial awareness and being able to twitch faster than the other guy. While computer gaming certainly does more for your brain than television, it cannot touch the research and practice it takes to hack.

When I was young I enjoyed Zork more than any of the other computer games. It took real intelligence.

I also remember a Bruce Lee game I copied from someone, and I enjoyed randomly deleting different sectors of the disk with a floppy hacking tool far more than actually playing the game once I had hacked it. Directly manipulating all that binary data gives me warm, fuzzy memories to this day.

As for those who aren’t into hacking but love building computers and robotics and making devices, well, they are my kindred spirits as well, and frankly, I don’t give a fuck about anyone else. Life is far too short to suffer through forcing anyone to learn anything, or even forcing myself to pretend I’m interested in making little pixels move around the screen, even if there is money in doing it to make engaging ads.

I definitely appreciate the artistry of well-made games. [I’m the guy who stares at objects when Overwatch is queuing up the next game.] In fact, games are art. The coding is just the glue for the art and design.

Despite my suggesting hundreds of times that someone use their game development skills to make an educational game worth playing, not a single person has ever taken me up on it. Too bad, really; educational games are where the money is. If you can make even a half-decent game that will run on the average Chromebook these shitty school districts are buying everyone, then you can automatically sell it to all of them at the same time.