Friday, November 10, 2017
BMapp update
Well, the Burning Man Map App is coming along nicely. It's at the stage where I'm willing to show it to a UX designer to get some tips. I've also moved it to another host, served from here.
There is still a list of things that needs to be done, but I think it's really getting there!
https://garbage-collector-acronyms-27351.netlify.com/
Saturday, October 7, 2017
Update and partial FAQ
This is a FAQ for questions that I frequently imagine I am being asked. As you will see, my internal interlocutor can sound a bit judgey.
* "Rather than its being a public solo project? A publicly ignored solo project? A project that's so stunningly fun that someone will fork it and do something more interesting?" asks my internally judgmental interlocutor. "Don't worry about any of that. Open source it! Do it! Do it! YOLO!" he shouts.
What's going on with BMapp?
I am continuing to work on BMapp apace. I'm keeping it in a private git repo at Bitbucket for now. I may open-source it at some point, but there's something irrationally nice about its being a private solo project*. I've committed to Inferno, Redux and Rx. I'm looking at redux-observables, since it uses two libraries I'm already using.
What's up with all the Bookmarks lately?
Well, there is not really a public social bookmarking application anymore, since the demise of del.icio.us. When I find or build one, I'll move all of those to it. There are a lot more in my Chrome bookmarks. If I added them all, it would look even more like spam.
What's up with using Blogger? I think my grandad used Blogger once. All the cool kids use ${some_other_thing} or roll their own.
Rolling my own or moving it is in some distant future plan, a bit like my social bookmarking plans or the open-source social media plan. I started this blog waaay back, nigh on 10 years now, to keep track of a specific collaboration, almost immediately abandoned it, and rebooted it when I felt the need for a public notebook. It is quite limited, but so far I have not bumped up hard against those limitations. It does the job. The risks are that it will be sunsetted by the notoriously fickle Google, or that web standards will leave it far behind. (Does it even have any full-time staff?)
* "Rather than its being a public solo project? A publicly ignored solo project? A project that's so stunningly fun that someone will fork it and do something more interesting?" asks my internally judgmental interlocutor. "Don't worry about any of that. Open source it! Do it! Do it! YOLO!" he shouts.
Thursday, September 28, 2017
Bookmark: Font and Font fallbacks
Various resources dealing with fonts on websites.
Front End Center — Crafting Webfont Fallbacks (YouTube)
A Comprehensive Guide to Font Loading Strategies
fontfaceobserver: Webfont loading js library
Font style matcher + github
CSS Font Stack
Quick Tip: How to Consistently Render Typefaces Cross-browser (video)
Type Rendering Mix (library referenced in preceding video)
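As a taste of the loading strategies these cover, here is a minimal fontfaceobserver sketch (the font family and class names are placeholders):
const font = new FontFaceObserver('My Web Font');
document.documentElement.classList.add('fonts-loading');
font.load().then(() => {
  // the web font is ready; let CSS swap it in via the class
  document.documentElement.classList.remove('fonts-loading');
  document.documentElement.classList.add('fonts-loaded');
}).catch(() => {
  // never loaded; the CSS fallback stack keeps rendering
  document.documentElement.classList.remove('fonts-loading');
});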
Wednesday, September 20, 2017
Bookmark: Offline-only content
I may consider doing something like this for the BMapp: Offline-only content.
https://chris.bolin.co/offline/
Friday, September 15, 2017
Bookmark: The Jupyter Notebook
The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.
http://jupyter.org/
Thursday, September 14, 2017
Bookmark: Web Workers
Web Workers makes it possible to run a script operation in a background thread separate from the main execution thread of a web application. The advantage of this is that laborious processing can be performed in a separate thread, allowing the main (usually the UI) thread to run without being blocked/slowed down.
https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API
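A minimal sketch of the pattern (the file names are my own):
// main.js
const worker = new Worker('worker.js');
worker.onmessage = (e) => console.log('result:', e.data);
worker.postMessage(42); // hand the laborious work off

// worker.js
self.onmessage = (e) => {
  // runs in the background thread; the UI thread stays responsive
  self.postMessage(e.data * 2);
};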
Baby perspective on adults
Adults probably seem kind of like monsters to small children. Besides the obvious things, like that we are large and talk in loud, deep voices, we also do weird, dangerous things in the kitchen, like boil water and cook food, which we then eat as hot as we can handle it. Hot coffee, hot soup, food with molten cheese toppings and such probably seem like monster food to babies, food which we gobble up with huge bites.
But for that matter, babies seem like weird little creatures to older children, especially newborns, who can't do much except cry and just lie there like blobs. Adults are always boiling their bottles, wrapping them delicately in sterilized fabrics and such. And they smell bad quite often.
Like adults, really, if we're being honest.
Wednesday, September 13, 2017
Configuring Typescript in Visual Studio 2017
Be aware that there are 2 places to configure Typescript behavior in Visual Studio 2017 (and older, I assume). They do not communicate with each other. The first is the tsconfig.json file at the root of your project (the one you get when you type tsc --init in the command line). The second is in the properties of your project (right-click the project folder in Solution Explorer => Properties => Typescript Build).
If you find your Typescript behaving in a way that is different from what you expect, make sure that both config files are synced.
For instance, despite my explicitly putting "compileOnSave": false in tsconfig.json, javascript files kept popping in next to my .ts files at every save. It was because Compile on save was checked in the project properties.
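For reference, the relevant corner of tsconfig.json looks like this (the compiler options shown are illustrative, not my actual config):
{
  "compileOnSave": false,
  "compilerOptions": {
    "target": "es5",
    "outDir": "./dist"
  }
}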
Tuesday, September 12, 2017
Bookmark: Fake data
Bookmark: Stack Share
Discover & discuss the best open source and SaaS toolshttps://stackshare.io/
- Discover new tools and services
- See side-by-side comparisons
- Share your tech stack
Monday, September 11, 2017
Bookmark: Tech Conferences Calendar
Bookmark: CSS Methodologies
Here are various methodologies for structuring CSS:
OOCSS (Object Oriented CSS)
SMACSS (Scalable and Modular Architecture for CSS)
BEM (Block Element Modifier)
SUIT CSS (Style tools for UI components)
Atomic CSS
ITCSS (Inverted Triangle CSS)
Enduring CSS
Stylable is a pre-processor and component library that also has structuring constraints.
More will be added as discovered.
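As a one-glance taste of what these look like in practice, here is a BEM-style sketch (the class names are my own invention):
/* block */
.card { padding: 1em; }
/* block__element */
.card__title { font-weight: bold; }
/* block--modifier */
.card--featured { border: 2px solid gold; }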
Here is a danger to be aware of.
Third Party CSS Is Not Safe
Sunday, September 10, 2017
Bookmark: DEC64.com
DEC64 is a number type. It can precisely represent decimal fractions with 16 decimal places, which makes it very well suited to all applications that are concerned with money. It can represent values as gargantuan as 3.6028797018963967E+143 or as measly as 1.0E-127, which makes it well suited to most scientific applications. It can provide very fast performance on integer values, eliminating the need for a separate int type and avoiding the terrible errors that can result from int truncation.
http://dec64.com/
Friday, September 8, 2017
CDNs vs. bundling
Remember, this is a personal sketchbook, not a how-to guide nor really even a blog. So I might have some bad ideas on here. Might even be terrible. That's ok.
I created cdnler to help me with a workflow of using Typescript, but without necessarily loading modules through npm. If I were to use Ramda to build a site, I would load (only) Ramda's type declarations using npm, so that its classes and methods would be available to my development environment. But on the web page itself, I'd just use a script call to a CDN. Any project I'm working on might have 5 or 6 CDN calls on one page, and since it's best practice to use CDNs only with a local fallback, for production I'd have to download every CDN asset file by hand, and then hand-code its fallback. cdnler just does that.
Which is fine as far as it goes, but the fad these days, backed by the power of Google, is to bundle up all of your javascript into one or two large, minimized files, and then link to those from within your page. This has the advantage of reducing server calls, since a script reference blocks any other browser activity until that call is resolved. CDNs are another way of handling this bandwidth issue, since in theory a CDN asset would be cached by the browser, and presumably shared when the same browser visits a different site with that same asset, but, to be honest, that's a bit dubious. Considering jQuery alone, there are so many different versions, and so many different CDNs that host it, that even sites that use jQuery, and a CDN to reference it, would not necessarily be caching the same files. As well, Google now wants your site to be a progressive web app, which makes CDN references problematic. Not impossible, but the tide is against that method.
The expected way to bundle javascript code is to load external libraries locally using a package manager like npm or yarn, then, in the code, to require them using node.js syntax, e.g.
import R = require('ramda'); // TypeScript's import-require syntax
and then to use a build tool like browserify or rollup, which work by following all those require calls up the abstract syntax tree to learn which libraries comprise the app, mushing them all into one file, which is what then goes into the <script> src reference.
Here's where my bad idea comes in. Why not build a companion module to cdnler that would follow all of the local script references in an html file (<script src="local.js">), then bundle and minimize those local assets? Then it could replace those script references in the html file with a single script reference to the bundled asset. It wouldn't need to do any so-called treeshaking; it would just concatenate the contents of each file into one long file.
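In sketch form, under the assumption that the html contains only plain <script src="..."></script> tags, that companion might look like this (file names are placeholders, and minification is left out):
// sketch only: a naive concatenating bundler for local script tags
const fs = require('fs');

const html = fs.readFileSync('index.html', 'utf8');
const scriptTag = /<script src="(?!https?:)([^"]+)"><\/script>\s*/g;

// gather the local sources in document order
const srcs = [];
let m;
while ((m = scriptTag.exec(html)) !== null) srcs.push(m[1]);

// mush them into one long file; no treeshaking
const bundle = srcs.map((src) => fs.readFileSync(src, 'utf8')).join(';\n');
fs.writeFileSync('bundle.js', bundle);

// swap the individual references for a single one
const out = html.replace(scriptTag, '')
  .replace('</body>', '<script src="bundle.js"></script>\n</body>');
fs.writeFileSync('index.bundled.html', out);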
To recap, this hypothetical work-flow would be to reference external libraries using CDNs during development, then download them locally during pre-build, then bundle them into a single (or more) local file during build, and then replace the CDN script references in the html with a single reference to that local file.
What would be the point? I just like working with CDNs. There's something neat about having the world's javascript libraries available to just a single flat index.html. And there's something hegemonically monocultural about using npm to manage all of everyone's code. If there's ever a problem with npm, we're all in trouble. So, it's a good thing, I think, to have a variety of ways to build and create apps.
Thursday, September 7, 2017
Brunch and BMapp
The "Burning Man App" (I'm settling on the working title BMapp) that I'm building has stalled for a minute. Before I dig in, and move from proof of concept to actual build, I want to make sure that the architecture and pipeline are solid. That is to say, I'd like a good framework and a build task that keeps the coding itself relatively simple. For the framework, I've settled on Inferno, because it is light and fast. While its key developer has left (?) the project to move onto React at Facebook, and I'm not sure what that means for Inferno's future, for now, the relative advantages yet go to Inferno.
As to a build task, the Inferno team itself appears to recommend Brunch, an open-source, community-driven "build tool" rather than a "generic task runner". Its main selling point is that it reduces some of the monstrous complexity that some build task runners can encompass. A single brunch-config.js file pointing to input files, plugins and some output directories, and boom, you have a minified, uglified, compressed, tightened, tested web app. No fuss, no muss. Sounds great! Also, during setup I ran into a few errors (that were ultimately caused by a flawed npm release, not Brunch): with one issue ticket the Brunch team was incredibly responsive. They were on it! Even committing to closing out the issue over the weekend! That is dedication. "Windows support is very important to us."
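For flavor, a brunch-config.js is about this small (a sketch from memory; treat the exact keys as assumptions):
module.exports = {
  paths: { watched: ['src'], public: 'dist' },
  files: {
    javascripts: { joinTo: 'app.js' },
    stylesheets: { joinTo: 'app.css' }
  }
};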
So, when I say that Brunch isn't working for me, it is by no means a slight against Brunch itself or the hardworking team that created it. If you're looking for a build pipeline, do check it out! This post just documents that I tried it, and why I'm moving on.
I generally like to code javascript using Typescript, for a number of reasons, but mostly because it is javascript, only with strict typing. Coding with it prevents me from easily relying on type ambiguity (and the advantages that ambiguity can sometimes confer), and in return, I'm made aware in the editor, even before compilation, of potential problems. Other advantages: I can use the latest and greatest coding practices from ES6 and later (does what Babel does), and my code will automagically be compiled into whatever version necessary.
Brunch kinda has typescript support, but see, that's the thing: it suffers from the same issue that all popular task runners do: you need to rely on some intermediary plugin that may, or may not, be up-to-date. Brunch uses 'skeletons', which are a kind of template or starter pack for different configurations. Their official 'brunch with typescript' skeleton, out of the box, gives you a `brunch-typescript is unsupported, use typescript-brunch instead` deprecation message. Okay, no big deal, I'll just swap out the brunch-typescript plugin for the typescript-brunch plugin. But typescript-brunch carries this note: "From version 1.8.2 up to current version, this plugin may report TypeScript errors that you are not expecting" and "We are hoping to support the full language service, at least for brunch build at some point, but until then...[etc]". Not exactly inspiring confidence. Okay, no big deal. I'll just use Typescript itself to compile to javascript, and once there, let Brunch take over to do what it does best.
On to uglify-js, the minimizer. I followed all the instructions as I understood them, and it just plain did not work; meaning, it did not minimize and 'mangle' the compiled javascript, or really have any effect at all. Simply installing the plugin and referencing it in the config file appeared to have zero effect on the code itself. No doubt, and I mean this, it's my bad. I didn't understand the instructions, or didn't fiddle with it enough. Perhaps if I had changed the order of the plugins, or if I had only spent another 10 minutes or half an hour, then I'm certain I could have gotten it working as expected.
And, it's community driven! If it's not working, get on the stick! It's not like you can demand your money back. Write up a ticket describing the issue. Ask questions on the forum! Write the bug fix, if that's what's necessary. I get it! But that's the thing: I had already spent a lot of time with it. A task runner / builder / DevOps tool / what-have-you is supposed to make things easier, to minimize errors and time in the setup and manipulation of code. If you're spending as much time on the tool as you would just rolling your own, then it's maybe best to roll your own. My spare coding time, right now, is dedicated to getting this BMapp up and going.
So, for now, it's back to using npm as my build tool. It's simple, it works, and if something isn't there, it's super easy to code up a module that does what you want.
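Which, in practice, is just a handful of entries in package.json (these particular scripts are illustrative, not my actual setup):
{
  "scripts": {
    "build": "tsc && node tools/bundle.js",
    "watch": "tsc --watch"
  }
}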
Visual Studio 2017 and Frontend Development
Visual Studio 2017 is a great IDE for frontend development that has some extremely picky, opinionated ideas about how you should set up your projects. And since I, too, have extremely picky, opinionated ideas about how I should set up my projects, we often fight during setup, VS2017 and I. VS2017 is a cranky old man who hired you to mow his lawn, insists that you wear gloves and smeary safety glasses, that you mow out back before you mow out front, and then forgets what he told you and yells at you anyway. That's only during setup, though. Once you get going, VS2017 is a dream.
If you, like me, use npm; git; a task runner; have a development folder separate from your production folder; and run a web server from your production folder; even if you're not at this point interested in the backend - or want to use an entirely different IDE and technology than .NET; then you can get the good parts of VS2017 without annoying Scripts folders and what-not.
These are the precise steps to follow to turn that Visual Studio 2017 from an angry old man into a cooing, cuddly baby. An extremely useful baby.
(Oh, by the way, this is if you're setting up a new project. VS2017 makes you jump through a whole host of other annoying steps if you have already set up another project, so follow these steps only if you have not yet set up your project. I may, or may not, cover the latter case in another post.)
- Open VS2017, then do File => New Project
- Choose Blank Node.js Web Application
- Fill in the fields in this way:
- Name => dev
- Location => the *parent* directory of where you want your project to live.
e.g. /workshop/websites
(Not inside a folder named for your project. e.g. NOT /workshop/websites/MyProject)
- Solution => create new solution
- Solution name => *this* is where you put the name of your project's folder.
e.g. MyProject
- Check both create directory for solution and create new git repository.
- Click OK.
- From the 'Solution Explorer', you can go ahead and delete the Scripts folder, package.json, README.md and server.ts.
- Right click the 'Solution' node in the 'Solution Explorer' and pick 'Add' then 'New Website'
- Choose ASP.NET Empty Website.
Don't worry. There is no ASP.NET involved.
- Web location: browse to the project directory (e.g. MyProject) and create a new folder named dist by adding its name to the path in the field named "Folder:"
e.g. C:\workshop\websites\MyProject\dist
- Click 'Open'. You should get a dialog box telling you that 'dist' does not exist and asking if you would like to create it. Click 'Yes'.
- Click 'ok'.
- Click through any warnings.
- Go ahead and delete the Bin folder, packages.config and Web.config.
- Right-click on the 'dist' node in the 'Solution Explorer' and click 'Add' => 'Add New Item'
- In the "Name:" field write 'index.html'
- Click 'add'.
- In the editor, add a "hello world" message in the <body> tag.
- Right click the 'dist' node in the 'Solution Explorer' and select 'View in Browser'.
- If you see your index.html message, success!
- Right-click the root 'solution' node in the Solution Explorer.
- Select 'Add' => 'New Solution Folder'
- Name it 'root'.
- Right-click, 'Add' => 'Existing Item'.
- Select .gitignore and any other relevant file from your project folder (the parent of 'dev' and 'dist', e.g. MyProject).
Now these are easily accessible in the Solution Explorer from the 'root' folder. Do this any time during development if there are files in there you would like to see inside the 'Solution Explorer'.
- Related: if at any time there is a file that is definitely in the dist or dev folder but is not showing up in the Solution Explorer, check for an icon along the top, just under the title 'Solution Explorer', named 'Show All Files'. Click that. Your missing file should show up. You can right-click its icon and select 'Include in Project' if you like.
- Select View => Other Windows => Terminal Window. (Or use Power Shell or CMD).
- In the 'dev' directory type:
- npm init, then fill in the information
- tsc --init => creates a tsconfig.json file.
- git status => check to see if there are any files that need to be added to .gitignore (a starter sketch follows this list)
- If all is well, this would be a good time to type git add . && git commit -m "init commit"
- Now the world is your oyster.
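And the .gitignore starter promised above; these entries are my guesses for this layout, so adjust to taste:
node_modules/
.vs/
bin/
obj/
*.suo
*.user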
Saturday, September 2, 2017
Word distance calculator
A friend on Facebook posted this:
Ruin a book with one letter. First try, Naked Bunch.
Some of the other answers were Night Club, Moby Duck and Sale of Two Cities. I wondered if I could use Uncle Tom's Cabin somehow, but then it occurred to me that it would be fun to have a program that would give all words within n letter distance from some arbitrary word. So, I could enter Cabin and see if there were words within 1 letter distance. I looked online for about 30 seconds, and didn't find it, so that means that no one has ever thought of that, ever.
So, that's my idea. Code a function where ('tale', 1) returns ['sale', 'male', 'pale', 'talk', 'tall', 'tile', ...etc].
It could be a component of a longer program that calculates answers to those problems like 'Turn pain into rail in the shortest number of steps possible, wherein each step you can change one letter in the word. e.g. pain => rain => rail'
Edit (really, 2 minutes later). I was looking for the Levenshtein Distance. I knew that!
Edit. Well, would you look at that: Levenshtein-Automata
Edit. And that. Levenshtein-Puzzle
Edit. Approach would be to find the n=1 distance by inputting the word, say word, and search a dictionary for .ord then w.rd then wo.d then wor. and concatenate the results. Simple. To find n=2 distance, input each of those words into the same function, but being careful to filter out searches that would search the same space. And so on. [Edit. No, that's dumb. To find n=2, just search a dictionary for ..rd then .o.d then .or. then w..d and so on.]
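As a toy sketch of that n=1 search (my own few lines, not the Github version):
// words at distance 1: same length, exactly one letter differs
const neighbors = (word, dictionary) =>
  dictionary.filter((candidate) =>
    candidate.length === word.length &&
    [...word].filter((ch, i) => ch !== candidate[i]).length === 1);

// neighbors('tale', ['sale', 'male', 'tile', 'tales', 'tale'])
// => ['sale', 'male', 'tile']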
Edit. Okay, done. Also on Github.
I have used it to find these gems:
- War and Pence
- A Portrait of the Autist as a Young Man
- Brave Neg World
- The Gripes of Wrath
- The Pun Also Rises
- The Sub Also Rises
- Tropic of Canker
- The Maltose Falcon
- The Catcher in the Eye
- Something Wacked This Way Comes
- Men and the Art of Motorcycle Maintenance
- The Call of the Mild
- Nuked Lunch
- Bridesdead Revisited
- Sometimes a Great Lotion
- American Goys
A fun little project.
Friday, September 1, 2017
Coming along. Framework hesitancy. The Redux Decision.
The Burning Man App is coming along. What started out as a proof of concept playing with the data has gradually moved towards becoming an actual web app. The journey of building it has been taking me through some adventures (if we can quite loosely define 'adventure' here as 'source code' and 'articles about code'). (Also, it's to the point that I'm going to have to choose a name, and I don't want to run afoul of copyrights / trademarks. The Burning Man App is a good working title, but probably not for release.)
A few new things I have learned:
There is a whole ecosystem of virtual DOMs.
There are tons of web application frameworks in active use and development, not just React and the Angulars.
Lighthouse is a thing.
Page Speed Insights is a thing.
Given the nature of the app, I'm probably* going to have to use a framework that supports a virtual DOM. Page Speed Insights reports that it contains over 3,000 elements (stating that less than 1,500 is desirable). This is undoubtedly because of the SVG that comprises the Black Rock City map, which, besides the infrastructure of the city itself, also contains icons representing 308 artworks, 1,299 camps, and 3,374 events!
This will necessarily require some optimization though, because, while desktops and fancier smart phones can handle all that, under-powered phones and tablets crawl. The app doesn't really need to display all of the events at once, either. I can also probably insert icons only as the user zooms in, and only within its viewport.
I'm still not ready to pull the trigger on a full-blown framework yet. I'm peering closely at Vue.js, but we'll see. Not sure why I'm so reluctant to wade into React. It would be good for my professional development to at least be familiar with it, but as this app is a labor of love, I want any technology to fit with what I want to do, rather than trying to fit the app into some mismatched box. To those of you who already know a framework, including my future self, it may be hard to understand this hesitancy. But remember: learning a framework takes an investment of time and energy; time and energy spent learning a framework is time and energy not spent developing the app, and not spent with family and friends; and, at the end of the day, your app is now married to a community and ecosystem that may or may not be a good match. A divorce is costly. Apps also require maintenance, and if it has been some time since you used that framework, and the framework you've chosen is unintuitive, then you will have to spend time relearning (on some level, this is my reluctance with React: javascript, HTML and even CSS appear to be jumbled all up together in JSX). There are pros, of course, but those are some of the cons.
I have committed to using Redux, however, at least for the user interface. Keeping application state in a single object to which all elements refer makes a lot of sense. The app will be able to store that state, as well, so that the user can return to the app as it was. It will also be much easier to integrate a framework when the time comes.
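In sketch form, that single state object amounts to this (the state shape and action names are placeholders, not the app's actual ones):
const initialState = { zoom: 1, selectedCamp: null };

const reducer = (state = initialState, action) => {
  switch (action.type) {
    case 'ZOOM':
      return Object.assign({}, state, { zoom: action.zoom });
    case 'SELECT_CAMP':
      return Object.assign({}, state, { selectedCamp: action.campId });
    default:
      return state;
  }
};

// const store = Redux.createStore(reducer);
// store.subscribe(render);
// store.dispatch({ type: 'SELECT_CAMP', campId: 42 });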
Thursday, August 31, 2017
Idea: benchmark for vdom
What is the tradeoff between using a virtual document object model and straightforwardly updating the DOM directly? Surely, an app that inserts a single div in response to user input wouldn't justify the use of a virtual DOM. But what is that tipping point? What has to happen in the app in order for a virtual DOM to become worth the engineering?
Perhaps a benchmark test could start with comparing two apps. One using a virtual dom, the other not, but otherwise the same. Compare metrics like time difference when inserting one element per cycle. Then ten. Then 100.
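The direct-DOM half of such a test might be no more than this (the element counts are arbitrary):
const timeInserts = (n) => {
  const t0 = performance.now();
  for (let i = 0; i < n; i++) {
    const div = document.createElement('div');
    div.textContent = 'row ' + i;
    document.body.appendChild(div);
  }
  return performance.now() - t0;
};

[1, 10, 100, 1000].forEach((n) =>
  console.log(n + ' inserts: ' + timeInserts(n).toFixed(1) + 'ms'));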
This might help:
https://localvoid.github.io/uibench/
Saturday, August 26, 2017
CDNs vs. Bundling and a tool to help.
When creating a website, I'm a fan of using content delivery networks (CDNs) to include libraries, over straight-up bundling all code into a single js file. That is to say, I like putting such tags into the heads of my html pages:
<script src="https://d3js.org/d3.v4.min.js"></script>
One ostensible advantage of using CDNs is to reduce bandwidth overall by caching commonly shared code on individual browsers. Rather than each app loading its own version of d3, for instance, the CDN delivers the library once for each user, and each website references that same code. I imagine this would work quite well with a popular library like jQuery. Another advantage is that it allows the developer to quickly try out libraries during development without committing the time to download and configure those assets. If the idea or the library isn't working, it's easy enough to roll it back by simply deleting that <script/> tag.
However, it's a bad idea to rely on CDNs solely in production code. If the CDN is unreachable by the user, then anything that relies on that code will fail. Hosted fallbacks are important: if a src call errors out, then the browser can load that same asset from the website's server, and the user experience is unchanged. It's easy enough to do. Just follow the script with an if-then check: if the library is undefined, then load the locally hosted version. e.g:
<script type="text/javascript"> if (typeof d3== 'undefined') { document.write(unescape("%3Cscript src='/js/d3.v4.min.js type='text/javascript'%3E%3C/script%3E")); } </script>
<script src="https://d3js.org/d3.v4.min.js"></script>
One ostensible advantage of using CDNs is to reduce bandwidth overall by caching commonly shared code on individual browsers. Rather than, say, each app loading its own version of d3, for instance, the CDN delivers the library once for each user, and each website references that same code. I imagine this would work quite well with a popular library like jQuery. Another advantage is that it allows the developer to quickly try out libraries during development without committing the time to download and configure those assets. If the idea or the library isn't working, it's easy enough to roll it back by simply deleting that <script/> tag.
However, it's a bad idea to rely on CDNs solely in production code. If the CDN is unreachable by the user, then anything that relies on that code will fail. Hosted fallbacks are important: if a src call errors out, then the browser can load that same asset from the website's server, and the user experience is unchanged. It's easy enough to do. Just follow the script with an if-then check: if the library is undefined, then load the locally hosted version. e.g:
<script type="text/javascript"> if (typeof d3== 'undefined') { document.write(unescape("%3Cscript src='/js/d3.v4.min.js type='text/javascript'%3E%3C/script%3E")); } </script>
Configuring all of this manually can be a bit of a pain, and error-prone, though. Much easier, and this is the route many devs go, is to simply npm all libraries, roll them all up into a bundle using a task runner, and have a single script tag referencing the entire application in a single js file. And, heck, I can't really disagree: loading-time of an app bundled like that is much less than the time to load lots of individual files. It feels less communitarian to do it that way, as it doesn't take into account the overall time-and-bandwidth savings of using CDNs, but I get it. It's safer.
Still, since I like the CDN approach so much, I wrote a tool to take some of the pain and risk of error out of using them.
Cdnler is a node app that examines the <script/> tags of input html files, downloads those assets locally, and then outputs the html with altered links and fallbacks. Basically, it does all of the above automatically. It's a bit rough right now in the sense that its use case is exactly my own. Within that use case it works perfectly, though. I imagine that it could be useful even to folks who like to bundle their entire app into a locally served script, because it will automatically download all linked assets.
I'm happy to expand its usability for other developers, so if you like the idea, but it's missing some ease or usability or feature that prevents you from adopting it, please let me know what that is.
Immutability and Bad Code
I don't often watch videos about coding, but I'm on a jag of it lately, and have picked up a few things on the horizon along the way: GraphQL (an alternative to REST) for one. CSS Grids for another. (And, as an aside, a fascinating video called Braids in Higher Dimensions.) One video in particular I'd like to highlight is this talk by a fellow named Lee Byron at an event called Full Stack Fest in 2016. It's longish, but if you're a developer on a videos-about-coding jag, too, and it's currently not much past 2017, it might be worth your time.
It's a discussion on immutability, not just in user interfaces, but in the entire server-to-user-and-back cycle (He's essentially talking about React without ever actually mentioning React). But his final words are an articulation of something that I feel many developers, and those who herd them, lose sight of.
"MVC and REST did not solve all of the problems that we face in app development. But what I've presented today also won't. These ideas are going to be far from flawless, and they might not be right for the app that you're building. And I want all of us to continue to challenge the notion that there is one right way to do things. It's not true."
I often encounter an almost religious conviction about various frameworks and approaches as talismans against writing bad code or buggy apps. I have seen people - decent coders, otherwise - unwilling or unable to code without first finding a framework or an approach that other people acknowledge is the right way to do things™. And then, finding this (non-existent) consensus and carefully following all of those guidelines, frameworks and recommendations, they still write overwrought, overengineered, buggy applications that are difficult to maintain.
Sunday, August 20, 2017
Fast & Dirty Minimal Unit Testing
When we coders first encounter the concept of unit testing, it's usually in the context of some kind of instruction: a class or an article of some sort, maybe a video or a podcast. Most often, the instructor will demonstrate the concept of unit testing with the use of a testing framework. In Javascript, that might be QUnit or Mocha. In Java, JUnit. Etc. While a testing framework is fine, configuring it and actually using it can be a bit of a task, especially if you're doing something interesting. Sometimes you just want to code, code, code, not bean count, am I right? You're anointed. You're zooming. Ain't nobody got time for all that!
Friends, I'm here to tell you that you don't need no fancy testing framework to do unit testing! You can test right as you zoom along. Here you go. Here is your essential, no-frills framework:
const doTest = (message, result, expected) =>
console.assert(expected == result, message, result);
// port as necessary to the language of your choice.
// or change == to === as you wish.
Use it like this:
doTest("squareIt(10) should be 100 but was ", squareIt(10), 100);
Your first argument is a message ("function should return x"); second argument is literally the function calling some value; and the third is the value you expect to get. If the expected value does not match the result, then the console will show an error:
Assertion failed: squareIt(10) should be 100 but was 101
Simple. You can open your browser console right now to see it in action.
However, it probably needs one more iteration, because this will fail when comparing non-primitive-type objects like
([1,2,3] == [1,2,3]) == false
a==b only works if a and b are primitive types. But remember: this is a minimalist framework designed to get you testing without interrupting flow. When you need an upgrade, just go ahead and modify it as you see fit. If you find yourself needing to compare arrays, for instance, go ahead and modify the doTest. One way to solve this is to borrow yourself a fine array comparator such as:
const isEqualArray = (a, b, i = 0) => a.length != b.length ?
false : i >= a.length ?
true : a[i] == b[i] ? isEqualArray(a, b, i + 1) : false;
and then modify your unit test function so that doTest checks to see if the result is an array, and then runs the isEqualArray function.
const doTest = (message, result, expected) =>
{
const isEqual = (result instanceof Array) ?
isEqualArray(result.sort(), expected.sort()) :
result == expected;
console.assert(isEqual, "\n" + message + "\nReceived:", result);
};
Don't forget to add a unit test to make sure it's working:
doTest("Comparing the same arrays should pass: ", [1,2,3,4],[1,2,3,4]);
Boom! Your framework now compares arrays.
With doTest in place, it's easy to drop a test anywhere meaningful. When you find yourself thinking "This function should be returning x but it's returning y. Hmm.", just doTest("f(x) => y", f(x), y). Then you can get to work solving the problem with a permanent, reliable partner keeping your goal on track. Later, if some fix has broken something critical, you can more easily pin down what part is broken.
doTest('complicatedFunc("What is going on??") should be "Sssh, Pumpkin. It\'s okay." but is '...
Also, I recommend throwing this one at the head of your list of tests:
doTest("This test should always fail.", false, true);This will tell you that your test suite is working as expected and make your heart skip a beat at random moments. Always fun.
As a final note, obviously there are times when you will need a more sophisticated testing framework. But if you find yourself leaving constructing unit tests until the end, give this 'roll your own' approach a try. You can always easily port your tests to another framework whenever necessary.
Tuesday, August 15, 2017
BRC app update
Well, I can say that the 'map' now looks more and more like an 'app'! Click here to see it. It has severe UX issues, but it has a rudimentary search functionality, and some responsiveness. The search bar is in the upper left. Try it out!
Exciting!
Edit: D'oh! It fails in iPad Safari. That's going to take a minute to troubleshoot.
Sunday, August 13, 2017
BRC map update
Further progress here. Added art and camps with little labels. Zoom in to see them. Also, if you can open the javascript console in your browser (often ctrl-or-option-shift-J), you'll see the data for each marker. It's obviously still a work in progress, but it's coming along. While the map is for 2017, the events and camp locations are from 2016. BMorg will release location data for the current year in about a week.
Friday, August 11, 2017
Black Rock City map update
I finished the SVG map of Black Rock City. Feels good to get that out of the way. Now I have a place to put location data such as art, events and camps, from the Burning Man API. Further, with the javascript in place, it will be easier to make such maps for future years.
Edit: (The original title of this post was Black Rock City map complete) Ah, it's an old truism in software development never to label something as 'complete', 'finished' or 'final'. By 'finished' above, I meant only "I has a map". The project itself is not finished.
I've since added zooming and panning capabilities. To do that, I just grabbed a library (svg-pan-zoom by ariuta), which did the essentials of what I wanted, out of the box.
However, because SVG's coordinate system increases from top to bottom, but real-life GPS increases from bottom to top (i.e. south to north, at Black Rock City, anyway), I had used SVG's transform attribute to reverse the vertical scale. While this allowed an exact mapping between the map's xy coordinates and actual GPS coordinates, this also caused the svg-pan-zoom library's vertical panning to be reversed, as in airplane controls.
In order to get the vertical panning to work properly, I had to do without the easy fix of the reversed vertical transformation and abandon the conceptually convenient x-coordinate = longitude and y-coordinate = latitude scheme. The new GPS => x,y javascript transformation incorporates a vertical flip. It took a while to get it right, but the gain is that the transformation can be any arbitrary size and origin point.
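The transformation boils down to something like this (the origin, scale and viewport height here are assumptions, not the app's actual values):
// GPS -> SVG x,y with the vertical flip folded in
const makeProjection = (origin, scale, height) => (lat, lon) => ({
  x: (lon - origin.lon) * scale,
  y: height - (lat - origin.lat) * scale // SVG y grows downward; latitude grows northward
});

const project = makeProjection({ lat: 40.7866, lon: -119.2066 }, 10000, 600);
project(40.7866, -119.2066); // => { x: 0, y: 600 } - the golden spike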
Thursday, August 10, 2017
BRC, Javascript + SVG
I'm creating a map of Black Rock City in SVG using javascript, so that I can map some of the locational Burning Man API data I've been playing with. It took me quite a lot of time to get to flow on this project. It's not quite finished, but the latest version is here.
It took some thought to translate GPS data, with distances in feet, into SVG's xy-origin-at-upper-left coordinates. To make sure that everything is as it should be, for now, real GPS coordinates correspond to actual xy coordinates on the map. e.g. "the golden spike" at 40.78660,-119.20660 in GPS is represented in the SVG graphic as <circle cx="-119.2066" cy="40.7866" r="0.0005" fill="gold"></circle>. Note how the coordinates are identical.
The problem with this is that it's necessary to zoom way, way in, because the distances involved are tiny. For example, the distance from "the golden spike" to one of the pentagon corners is 8,175 feet, but in "GPS units", it's only 0.0293. Just a single foot is 0.000003! The width and height of the whole thing are only 0.7 x 0.6 (in SVG's unit-less distances). Such tiny numbers make dealing with fonts difficult. But it is pretty neat to just be able to grab a coordinate and stick it in without modification, so I'll experiment before deciding to scale it up.
Adding to my confusion is that the official GPS coordinates[pdf] for the enclosing pentagon appear to be way off (or my mind is way off). I assumed it was my transformations somehow, which is why I took extra care to make sure that there was a one-to-one correspondence between GPS coordinates and a spot on the SVG map. There are now zero translations or transformations of any coordinates, but the official coordinates make a sad and collapsing pentagon (here shown in blue). Way off, right? It looks like it was crushed!
The really odd thing is that the coordinates match Google maps almost exactly. If you enter those coordinates into Google maps, the marker will land almost precisely on the pentagon corners of last year's satellite photo of Black Rock City. Strange.
If you have any ideas about what happened, here, let me know! I'm still baffled.
Edit: When I calculate the distances to the Man from each of these official pentagon corners, I get these values for each, which are consistent with the visuals. They should all be equal, and they should all be more-or-less 8175. As you can see, they vary from a few hundred feet to over a thousand.
BM Official Pentagon Coordinates

| Latitude | Longitude | Distance to Man (GPS units) | Distance to Man (feet) |
|---|---|---|---|
| 40.78306 | -119.23568 | 0.0293 | 8175 |
| 40.80652 | -119.22006 | 0.0240 | 6709.0 |
| 40.80247 | -119.18581 | 0.0262 | 7298.8 |
| 40.77657 | -119.18026 | 0.0282 | 7865.3 |
| 40.76448 | -119.21119 | 0.0226 | 6304.3 |
The mystery deepens, but so does my confidence that it's the data and not me!
By contrast, these are the GPS values I'm using, generated by calculating 5 points 72° apart on an 8,175-foot radius from the golden spike coordinate. Interestingly, one of the points almost corresponds to an official corner. The order's not the same, because I generated mine widdershins from high noon.
My "fuck it!" coordinates | |||
---|---|---|---|
GPS coordinates | Distance to Man | ||
Latitude | Longitude | GPS unit | feet |
[40.8073, | -119.1859] | 0.0293 | 8175.0 |
[40.8127, | -119.2199] | 0.0293 | 8175.0 |
[40.7820, | -119.2355] | 0.0293 | 8175.0 |
[40.7577, | -119.2112] | 0.0293 | 8175.0 |
[40.7733, | -119.1805] | 0.0293 | 8175.0 |
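For what it's worth, this reconstruction reproduces the table above exactly; the 45° starting angle is inferred from the data rather than taken from my notes.

const spike = { lat: 40.78660, lon: -119.20660 }; // the golden spike
const RADIUS = 0.0293; // 8175 feet, in GPS units

// five points, 72° apart, counterclockwise around the spike
const corners = [0, 1, 2, 3, 4].map(i => {
  const angle = (45 + i * 72) * Math.PI / 180;
  return {
    lat: +(spike.lat + RADIUS * Math.sin(angle)).toFixed(4),
    lon: +(spike.lon + RADIUS * Math.cos(angle)).toFixed(4)
  };
});
// corners[0] => { lat: 40.8073, lon: -119.1859 }, the table's first row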
Again, you can see the two pentagons here: http://rendall.tv/bm/v002/map.html
Tuesday, August 8, 2017
BM API + dataviz
I'm having fun playing with the Burning Man API and d3. My noodling is here: http://rendall.tv/bm/v001/
I don't know if I will ever set it up to be an actual, full, proper app, but for now it's fun to see pie charts of the types of scheduled events, or bubble charts of theme camp surface area, or bar charts of the number of events each theme camp is hosting. I'm using Ramda.js to keep the data flow reasonable. As I develop it, I will likely use Rxjs to make interaction states easier to reason about.
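For a flavor of what I mean by keeping the data flow reasonable, a pipeline for "events per camp" might look like this. A sketch only: the hosted_by_camp field name is my guess at the API's event shape.

const R = require('ramda');

// events: array of event records from the API
const eventsPerCamp = R.pipe(
  R.groupBy(R.prop('hosted_by_camp')), // bucket events by hosting camp
  R.map(R.length),                     // count each bucket
  R.toPairs,                           // [ [campId, count], ... ]
  R.sortBy(R.nth(1)),                  // ascending by count
  R.reverse                            // busiest camps first
);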
One interesting challenge:
I'd like to get data with respect to hometowns of the various theme camps. Data for a theme camp comes in this kind of JSON object, for example:
{"uid": "a1X0V000002rka2UAA",
"year": 2017,
"name": "Houphoria",
"url": null,
"contact_email": null,
"hometown": "Houston",
"description": "The purpose of Houphoria is to provide multiple spaces for unique interactions to explore all of our senses. We are a collective of Houston based makers, visual artists, musicians, and DJs coming together and creating a environment for participants to come and interact with us and our creations. By day, we're a well shaded coffee lounge and bar, but at night is when our space really comes alive with vibrant lighting, sounds, and projections." }
As you can see, the hometown for the camp Houphoria is Houston. Most hometowns are pretty clear:
San Francisco
Los Angeles
Unknown
Reno
Oakland
Seattle
Portland
San Diego
However, others are not so clear:
- Some hometown fields contain several locales, and this concept is expressed in various ways:
- Vancouver/Louisville
- San Frandencouver
- Woodacre-Seattle
- SD-OC-LA-SF
- Portland & Bend
- San Fran and LA
- New York, Dubai, San Francisco, Hong Kong
- Woodstock NY to Arcata CA, Shanghai/Bahrain/Norfolk VA
- There are several ways to express the same city. Including misspellings, San Francisco is expressed variously as:
- SF
- San Francisco
- Bay Area
- SF Bay Area
- San Francisco bay area
- Bay Area, CA
- San Francisco and the surrounding Bay Area
- San Fra
- San Fran
- San Fransisco
- San Franscisco
- Some camps add extraneous information:
- Reno, Nevada (mostly), with a couple from Salt Lake City, Utah, and California
- Ibiza. This is were the founding members of the camp met, most people are from different countries but Ibiza is the root
- Nikola-Lenivets, GPS 54.75782,35.60123
e.g: ["SF", "Bay Area", "SF Bay Area", "San Francisco bay area", "Bay Area, CA", "San Francisco and the surrounding Bay Area", "San Fra", "San Fran", "San Fransisco", "San Franscisco"] => "San Francisco")
At this point, though, I feel like taking on this challenge would stall my flow on more interesting problems.
So why not simply leave it? In other words, why not treat a camp with the hometown "Los Angeles, San Francisco, New York" as being in a distinct bucket from "San Francisco, Los Angeles, New York"?
Well, that's what I'm doing for now, but it feels inelegant. If the data is to impart any insight into where camps come from, the normalization will eventually need to be done.
But this project is about exploring d3 in a fun way, not getting bogged down in implementation. So, moving on.
Saturday, August 5, 2017
An American's Guide to Attending a Finnish Funeral
My friend's elderly aunt died recently, and today we went to her funeral. There are a few differences between funerals in the US and in Finland, so if you do ever find yourself going to a funeral in Finland, here are a few tips.
Keep in mind, though, that Finns are a gracious and laid-back people, so if you find that you have been unintentionally weird and foreign, they want you to feel at ease. "Ei se mitään", they say. It's nothing. It is more important that everyone feel comfortable than strictly adhering to protocol.
Tip #1: pall bearing is no joke. In both countries it is as close to a sacred duty as you will come, of course. But in Finland there is no room for error, no tricks. You will carry that coffin, and you will put it in the ground. If you cannot carry heavy weight, if you even have doubts about it, do not pick it up. You seriously risk disaster. No, nothing bad happened at my friend's aunt's funeral, but there was no backup if even one of us pallbearers had failed.
The second is that, in the US, there has been a shift away from solid black dress. Patterns are ok, hues are ok. As long as your clothes are not outright festive, you will be okay. In Finland, no. Dress only in solid black without pattern. If you are a man, wear a white shirt with your black suit. It is important that your outfit not be fancy. Do not demonstrate your sartorial sophistication. You do not want to stand out.
The third difference is that Americans smile a lot. Even at funerals, wakes, memorials and such. Lock that down. Finns smile less than Americans in general, and at funerals in particular. Do not wear a brave or comforting smile. It will come off as weird.
The fourth tip: bring a change of more casual clothes. It is possible that you may be invited to someone's home afterwards, and they will likely immediately change out of their funeral wear. You will feel more comfortable, yourself, if you can.
Friday, August 4, 2017
Two random ideas
- My life events and career as a JSON file. Then it can be the data for all sorts of presentations and filters, including a resume (rough sketch after this list).
- Burning Man events as a timeline map, mashing up these two:
- This d3 sketch: http://bl.ocks.org/cmdoptesc/fc0e318ce7992bed7ca8
- This BM API https://api.burningman.org/
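For the first idea, the shape could be as simple as this (placeholder data, obviously):

const life = [
  { date: '2017-08', type: 'project', title: 'BRC map in SVG + javascript', tags: ['svg', 'd3'] },
  { date: '2010-01', type: 'job', title: 'Example Co, frontend developer', tags: ['career'] }
  // ...one entry per life event; a resume is then just filter + sort + template
];

// e.g. a resume view: career entries only, newest first
const resume = life
  .filter(entry => entry.tags.includes('career'))
  .sort((a, b) => b.date.localeCompare(a.date));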
Impolished
I'm not really a fellow who has documented his tasks and accomplishments for posterity or others, nor have I felt comfortable talking about real-life events on the Internet. I'm too much of a perfectionist. Flaws in grammar or execution gall me. What if I express an opinion that dates badly?
But I've decided to toss those kinds of concerns, and just write. I would rather you know who I am and what I think, even if the thoughts are sketchy or incomplete. No one is fooled by a polished exterior.
One technical approach to one bilingual website.
I am updating my website, and I want it to be bilingual, supporting Finnish.
The technical solution I'm thinking of employing is to serve the same page irrespective of language, but alter its content on the backend before serving it. Since the Finnish version will be hosted at fi.rendall.tv, which essentially just points to rendall.tv, it's a simple matter to check whether the request URI is prepended with fi, and use that as the "serve the Finnish version" flag. Otherwise, just serve whatever page is asked for by default.
As an overview: to serve the Finnish version, I will most often take the English version and swap each instance of English content for a Finnish translation. On occasion it might be necessary to present a page formatted entirely differently for the Finnish version. And, of course, if there is no Finnish-language version, I'll fall back to English.
So, in some detail: every html request to rendall.tv will be received by a controller that first checks whether the request URI begins with fi. If not, it simply passes the request through. If it does, the controller first checks whether there is a full Finnish-language version of that page, under the same filename with .fi appended (e.g. a request to fi.rendall.tv/index.html will look for index.fi.html), and if so serves it directly (as index.html). That covers the comparatively rare case of complete Finnish re-formats.
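To make that concrete, here is an Express-style sketch of the controller. Express is purely illustrative; I haven't committed to a stack, and all names here are placeholders.

const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();
const ROOT = path.join(__dirname, 'public');

app.get(/\.html$/, (req, res, next) => {
  if (!req.hostname.startsWith('fi.')) return next(); // not fi.rendall.tv: pass through

  // fi.rendall.tv/index.html -> serve index.fi.html if a full Finnish page exists
  const fiPage = path.join(ROOT, req.path.replace(/\.html$/, '.fi.html'));
  if (fs.existsSync(fiPage)) return res.sendFile(fiPage);

  next(); // otherwise fall through to the translation-swap step below
});

app.use(express.static(ROOT));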
If there is not a full Finnish version of that page, the controller will then look for a similarly named static file (e.g. index.fi.txt) that will map a bunch of key-names to translation values.
eg: "howdy" : "Ohoi!"
This static file will be used to alter the requested file and present a Finnish version. HTML tags with the attribute data-translation="<key-name>" get their inner HTML replaced by <translation-value> in accordance with the mapping.
For instance,
<span class="display" data-translation="howdy">Howdy!</span>
magically transforms into
<span class="display" data-translation="howdy">Ohoi!</span>
It's important that there be an option to alter the actual inner HTML, not only the text.
The sentence "Welcome to the personal and professional website of Rendall Koski" must have for stylistic reasons a nbsp; between my first and last name. The font I'm using for the welcome banner, Reenie-Beanie, does not have a hard space. That is to say, nbsp; renders with no space at all. Reenie-Beanie renders Rendallnbsp;Koski as RendallKoski, not Rendall Koski. The fix is to wrap the nbsp; in a <span> with a .nbsp-fix class that has a different font-family (that does render nbsp;) like so:
Rendall<span class="nbsp-fix">nbps;</span>Koski
Now my name renders properly, and won't break apart. Problem solved. For English.
But! The translation for "Welcome to the personal and professional website of Rendall Koski" in Finnish is "Tervetuloa Rendall Koskin persoonallinen ja ammatillinen verkkosivuun"*. If my scheme only allowed straight text translations and no HTML, swapping this in would blow away my span.nbsp-fix above, and I'd be back to having either no &nbsp; or the stupid-looking RendallKoskin.
Therefore the translation scheme should also cover HTML, so that this mapping
"welcome": 'Tervetuloa Rendall<span class="nbsp-fix">&nbsp;</span>Koskin persoonallinen ja ammatillinen verkkosivuun!'
transforms the line
Welcome to the personal and professional website of Rendall<span class="nbsp-fix">&nbsp;</span>Koski
properly into this line:
Tervetuloa Rendall<span class="nbsp-fix">&nbsp;</span>Koskin persoonallinen ja ammatillinen verkkosivuun!
* I have not yet verified these translations with a native Finnish speaker.
Thursday, August 3, 2017
The problems and future of social media
Facebook brings so many wonderful things to the world: connection to long-unseen acquaintances; community organizing; updates from friends and family. Facebook is catholic, so to speak.
The cost, of course, is placing our thoughts, ideas, memories, pictures, opinions and habits into a closed, for-profit ecosystem with dubious ethics and the tendency to meddle with the presentation of that content (and the content itself) like a scolding, self-serving parent. Their algorithm for content curation is opaque, and designed, at best, to maximize their own advertising revenue.
Attempts to remake Facebook as another website will fail. Logging into a website held on private servers, in order to see or share or participate, exactly replicates the issues that Facebook has (e.g. 'Ello), no matter how ethically its owners promise to behave. Control and presentation of your content, of your identity, of your opinion, is still in the hands of other people.
The only way to solve the problems of these closed gardens like Facebook is to give every individual total control over their own content, and over the content feed they see. I'm talking about a paradigm shift in social media. I'm not talking website, I'm talking protocol. The web, exactly as we have it now, with our websites and blogs and such; but we subscribe to them, and receive them when and how we want them.
So, my pitch is: existing technologies such as RSS, open IDs and authentication, blogging software, etc., glued together with the right protocol (TBD), will allow individuals to have more control over their own data and identity, while still allowing all of the good that Facebook and other social media actually bring to the world. Individuals host their own equivalent of the "Facebook wall" (or hire a host), and can subscribe to other "walls". Posting on your "wall" can have different levels of security, available to a trusted audience or to the public, rather like Facebook now. Only, you own it. A free, open-source protocol that emphasizes interoperability and user control would let devs create any kind of client that suits them. Competing clients and competing hosts, with competing revenue models, will lead to all kinds of UX solutions, not just the one handed to you by the hegemony. Those who hate infinite scrolling, for example, will create and gravitate toward clients that solve that.
This is obviously only a roughly sketched idea. I actually see this future - of a distributed social media - as rather inevitable, whether or not I work on it. The more that social media becomes essential to our lives, the less willing we will be to settle for our social networks and identities to be controlled by others.
Some related tech:
FOAF
Scuttlebutt
Matrix