@drawohara   ❤️ || 🖤

they say i am a big fat 🤓 – and they are right!

for some strange reason, this makes me happy:

as you can read on my about page, i have written way too much software.

i got my start researching at NOAA

for C.I.R.E.S

while studying at CU's College of Engineering & Applied Science

wut?

basically, the university has a program that donates young scientists to other research institutes, to help do science.

its goal is getting the university's name on papers which, if you know anything about science, is gold. publish or perish… etc. publishing == funding.

30 years later, i realize that this was a fantastic introduction to start-up culture. no b.s., just, make shit that works, and go. no one to tell you what 'not to do' or market signals that design your product for you so you don't have to actually think and be bold - just raw instinct about what should be studied, how, and why.

that, and fundraising… nothing like building stuff and figuring out how to pay for it at the same time ;-)

the first project i did at CIRES is still one of my favorite projects of all time: we wrote a system, designed to run on old-skool linux field computers, that forest firefighters would use, tactically, in the field, to decide weather/whether or not (pun intended) sending a crew up a canyon to battle the blaze would result in them dying. mainly it was a wind analysis tool: hyper-local weather, delivered to a device, long before iphones became a thing.

(this is my explanation for why, when the los angeles fires erupted, i hopped right on my bike and went to check them out… fires, and the jobs responders are required to do for $26/hr, astound me)

subsequently, i went to work at FSL (Forecast Systems Lab) doing hyper-high-availability (five 9s: 99.999% uptime) for operational satellite ingest systems.
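five 9s sounds abstract until you do the arithmetic. a quick back-of-the-envelope in ruby (nothing here but math) shows how little downtime that actually allows:

```ruby
# five 9s == 99.999% uptime. how much downtime does that actually allow?
seconds_per_year = 365.25 * 24 * 60 * 60        # 31_557_600 seconds
allowed_downtime = seconds_per_year * (1 - 0.99999)

puts allowed_downtime         # ~315.6 seconds...
puts allowed_downtime / 60.0  # ...about 5.26 minutes, per YEAR
```

that is roughly five minutes a year, total, for the whole system. it changes how you design everything.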

we designed cutting edge systems. and novel… brutal… methods of ensuring consistency of classified data, such as STONITH, which stands for "Shoot The Other Node In The Head": a method used in what were then cutting edge high-availability clusters that would manage taking over as 'master' (a term since banished from software, probably for the best…) by literally toggling the power of the other node, to be damn sure it was off. things were simpler then, but also very complex. there was a lot to invent on every project. saas wasn't even a word.

i also did a lot of work in model verification: geophysical models take hundreds, or thousands, or even hundreds of thousands, of configuration values to run. people talk about how neat 12-factor configuration is now, and i just shake my head… what if you had to manage millions of configuration values? the next trick is to version them, so we know how they change over time because, as scientists, if we make a change to, say, a cloud physics model, we need to 'test it'. but

how do you test software when you don't know the 'right answer'?

the approach is actually, theoretically, simple:

you hold all variables, all the hundreds of thousands of them, constant, make changes to a few, and then look for patterns of changes in the output. in the case of weather models, this could mean that a change to a cloud physics model resulted in accurately predicting 8/9 historical storms, vs. the 7/9 a previous iteration would have predicted.
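to make the method concrete, here is a toy sketch in ruby. the config keys, version names, and hit counts are all invented for illustration; the real harnesses were vastly bigger:

```ruby
# toy model verification: freeze every config value, change exactly one,
# and compare hit rates against a fixed set of historical storms.
# all names and numbers below are invented for illustration.
baseline_config  = { cloud_physics: "v1", grid_km: 12, timestep_s: 60 }
candidate_config = baseline_config.merge(cloud_physics: "v2")

# everything except the variable under test must be held constant
changed = baseline_config.keys.select { |k| baseline_config[k] != candidate_config[k] }
raise "too many variables changed!" unless changed == [:cloud_physics]

# (hypothetical) hits out of 9 historical storms, from re-running each config
baseline_hits, candidate_hits = 7, 8
puts "v1: #{baseline_hits}/9  v2: #{candidate_hits}/9  improved: #{candidate_hits > baseline_hits}"
```

the discipline is in the `raise`: if more than one knob moved, you cannot attribute the change in output to anything.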

this type of analysis, foreign to many engineers, is back with a vengeance,

thanks to AI…

my next stint was at The National Geophysical Data Center, where i was able to participate in a bunch of super cool research:

and built very, very large super-compute: essentially big fat map-reduce style computing but, at the time, neither of those terms existed. we had to invent novel ways of moving our code off of big-endian (not spelled wrong) cray (also not spelled wrong) machines and onto tons of commodity hardware. namely, hundreds of linux boxen.
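for the curious, here is a tiny ruby illustration of the endianness problem (the bytes are made up): the same four bytes decode to different numbers depending on byte order, so big-endian records have to be unpacked explicitly on commodity hardware:

```ruby
# the same four bytes, two different numbers, depending on byte order.
# bytes invented for illustration; real records were far hairier.
record = [0x00, 0x00, 0x01, 0x02].pack("C*")  # big-endian 0x00000102

native = record.unpack1("L")  # host byte order: garbage on little-endian x86
proper = record.unpack1("N")  # "N" = 32-bit big-endian, regardless of host

puts proper  # 258
```

get this wrong once, silently, across terabytes of satellite data, and you will learn to respect byte order forever.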

i also did a ton of work around clustering… very low level c/c++ code, using ideas from signal processing and computer vision, to detect the edges of cities via a process similar to the watershed algorithm but… at scale.
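the real thing used watershed-style segmentation over enormous rasters in c/c++; but as a toy ruby sketch of the flavor, here is a connected-components pass that groups 'lit' pixels of an invented grid into blobs, the way city footprints pop out of lights-at-night imagery:

```ruby
# toy connected-components labeling over a made-up 'lights at night' grid.
# 1 = lit pixel, 0 = dark. real rasters were millions of times larger.
GRID = [
  [1, 1, 0, 0],
  [1, 0, 0, 1],
  [0, 0, 0, 1],
  [1, 1, 0, 0],
]

def label_blobs(grid)
  seen = {}
  labels = 0
  grid.each_index do |y|
    grid[y].each_index do |x|
      next if grid[y][x].zero? || seen[[y, x]]
      labels += 1
      stack = [[y, x]]                 # flood-fill the whole blob
      until stack.empty?
        cy, cx = stack.pop
        next if cy < 0 || cx < 0 || cy >= grid.size || cx >= grid[cy].size
        next if grid[cy][cx].zero? || seen[[cy, cx]]
        seen[[cy, cx]] = labels
        stack.push([cy + 1, cx], [cy - 1, cx], [cy, cx + 1], [cy, cx - 1])
      end
    end
  end
  labels
end

puts label_blobs(GRID)  # 3 connected 'city' blobs
```

watershed proper also handles touching blobs by 'flooding' from local minima, which is where it gets hard, and where the scale got interesting.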

throughout my tenure at NGDC, i was allowed to release piles of open source software and, i am very, very grateful for this. eventually i was able to share, through oss, over 200 open source projects enjoyed by many. i think this was foundational to my eventually winning a 'ruby hero' award, and i wish that more young engineers had creative time to just build things. this, i believe, is where true innovation comes from. not board rooms, or mining the data to just give people what they want. which, is probably potato chips.


coffee break…


next, this cowboy hired me: to compile the GNU scientific library on… wait for it… windows!

yep, i am that old!

(strange that, for the first time ever, i would actually now consider owning a micro$oft computer but… only because they run linux ;-)

anyhow, Greg worked for Don Springer, at a company called Collective Intellect which, at the time, was the "Mobius Group" (which would eventually become The Foundry Group) and… #BOOM… start-ups in Boulder, Colorado, were a thing.

it was a fun time.

it was after this that i started dojo4, which was the crown jewel in my life as a geek, for many reasons i hope to write about soon, including close to ten years mentoring techstars companies, where i have made some super duper great friends.

until then, i will say, as i always do, that:



this all-new `nerd blog` is definitely a work in progress, but you might enjoy the dojo4 archive, which contains some of my previous nerdly writing. and, for a peek at what i am working on now, see disco...

i'll be adding to the below list of articles, and plan to do quite a bit of writing about ai, ruby, embedding models, groq and vespa, to name a few... one thing i do enjoy about the ai revolution... so much to do!


  1. /nerd/fastest-possible-embeddings
  2. /nerd/ima