Showing posts with label idea.
Thursday, September 14, 2017
Baby perspective on adults
Adults probably seem kind of like monsters to small children. Besides the obvious things, like that we are large and talk in loud, deep voices, we also do weird, dangerous things in the kitchen, like boil water and cook food, which we then eat as hot as we can handle it. Hot coffee, hot soup, food with molten cheese toppings and such probably seem like monster food to babies, and we gobble it all up in huge bites.
But for that matter, babies seem like weird little creatures to older children. Especially newborns who can't do much except cry and just lie there like blobs. Adults are always boiling their bottles, wrapping them delicately in sterilized fabrics and such. And they smell bad quite often.
Like adults, really, if we're being honest.
Friday, September 8, 2017
CDNs vs. bundling
Remember, this is a personal sketchbook, not a how-to guide nor really even a blog. So I might have some bad ideas on here. Some might even be terrible. That's ok.
I created cdnler to help me with a workflow of using Typescript, but without necessarily loading modules through npm. If I were to use Ramda to build a site, I would load only Ramda's type declarations through npm, so that its classes and methods would be available to my development environment. But on the web page itself, I'd just use a script call to a CDN. Any project I'm working on might have 5 or 6 CDN calls on one page, and since it's best practice to use CDNs only with a local fallback, for production I'd have to download every CDN asset file by hand and then hand-code its fallback. cdnler automates exactly that.
Which is fine as far as that goes, but the fad these days, backed by the power of Google, is to bundle all of your javascript into one or two large, minified files and link to those from within your page. This has the advantage of reducing server calls, since a script reference blocks other browser activity until that call is resolved. CDNs are another way of handling this bandwidth issue, since in theory a CDN asset would be cached by the browser and presumably shared when the same browser visits a different site that uses that same asset, but, to be honest, that's a bit dubious. Consider jQuery alone: there are so many different versions, and so many different CDNs that host it, that even sites that use jQuery, and a CDN to reference it, would not necessarily be caching the same files. As well, Google now wants your site to be a progressive web app, which makes CDN references problematic. Not impossible, but the tide is against that method.
The expected way to bundle javascript code is to load external libraries locally using a package manager like npm or yarn, and then, in the code, to require them using node.js syntax, e.g.
import R = require('ramda');
and then to use a build tool like browserify or rollup, which work by following all those require calls through the abstract syntax tree to determine which libraries make up the app, then mush them all into one file, which is what goes into the <script src> reference.
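For concreteness, the conventional version of that flow looks something like this. The file names and the exact bundler invocation are only illustrative:

// Install the library and its types locally:
//   npm install ramda
//   npm install --save-dev @types/ramda

// src/index.ts
import R = require('ramda');

const shout = (xs: string[]) => R.map((s: string) => s.toUpperCase(), xs);
console.log(shout(['hello', 'world']));

// A bundler then follows the import graph and emits one file, e.g.
//   browserify dist/index.js -o public/bundle.js
// and the page references only that one file:
//   <script src="bundle.js"></script>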
Here's where my bad idea comes in. Why not build a companion module to cdnler that would follow all of the local script references in an html file (<script src="local.js">), then bundle and minify those local assets? Then it could replace those script references in the html file with a single script reference to the bundled asset. It wouldn't need to do any so-called tree-shaking; it would just concatenate the contents of each file into one long file.
To recap, this hypothetical workflow would be to reference external libraries using CDNs during development, download them locally during pre-build, bundle them into a single (or a few) local files during build, and then replace the CDN script references in the html with a single reference to that local file.
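Just to make the idea concrete, here is a rough sketch of what that companion step might look like. Nothing like this exists in cdnler yet; the function name and paths are made up, it leans on Node's fs, it uses a naive regex instead of a real HTML parser, and it skips minification entirely:

import fs = require('fs');
import path = require('path');

// Hypothetical companion step: find local <script src="..."> references in an
// HTML file, concatenate their contents into one bundle, and rewrite the HTML
// to point at that single bundle. No tree-shaking, just concatenation.
const bundleLocalScripts = (htmlPath: string, bundleName = 'bundle.js'): void => {
  const html = fs.readFileSync(htmlPath, 'utf8');
  const dir = path.dirname(htmlPath);

  // Naive regex: matches local scripts only (skips http/https CDN references).
  const scriptTag = /<script\s+src="(?!https?:\/\/)([^"]+)"><\/script>\s*/g;

  const sources: string[] = [];
  const stripped = html.replace(scriptTag, (_match: string, src: string) => {
    sources.push(fs.readFileSync(path.join(dir, src), 'utf8'));
    return ''; // drop the individual reference
  });

  fs.writeFileSync(path.join(dir, bundleName), sources.join('\n;\n'));

  // Re-insert a single reference just before </body>.
  const rewritten = stripped.replace(
    '</body>',
    `<script src="${bundleName}"></script>\n</body>`
  );
  fs.writeFileSync(htmlPath, rewritten);
};

bundleLocalScripts('public/index.html');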
What would be the point? I just like working with CDNs. There's something neat about having the world's javascript libraries available to just a single flat index.html. And there's something hegemonically monocultural about using npm to manage all of everyone's code. If there's ever a problem with npm, we're all in trouble. So, it's a good thing, I think, to have a variety of ways to build and create apps.
Saturday, September 2, 2017
Word distance calculator
A friend on Facebook posted this: "Ruin a book with one letter. First try, Naked Bunch."
Some of the other answers were Night Club, Moby Duck and Sale of Two Cities. I wondered if I could use Uncle Tom's Cabin somehow, but then it occurred to me that it would be fun to have a program that would give all words within n letter distance from some arbitrary word. So, I could enter Cabin and see if there were words within 1 letter distance. I looked online for about 30 seconds, and didn't find it, so that means that no one has ever thought of that, ever.
So, that's my idea. Code a function where ('tale', 1) returns ['sale', 'male', 'pale', 'talk', 'tall', 'tile', ...etc].
It could be a component of a longer program that calculates answers to puzzles like 'Turn pain into rail in the shortest number of steps possible, where at each step you can change one letter in the word, e.g. pain => rain => rail.'
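A sketch of how that could work as a breadth-first search. The helper and names here are mine, it assumes a dictionary loaded as an array of words, and it only allows one-letter substitutions, as in the puzzle:

// isOneAway: same length, exactly one differing letter.
const isOneAway = (a: string, b: string): boolean =>
  a.length === b.length &&
  a.split('').filter((ch, i) => ch !== b[i]).length === 1;

// Breadth-first search over the dictionary: the first ladder to reach the
// goal is also the shortest one.
const shortestLadder = (start: string, goal: string, dictionary: string[]): string[] | null => {
  const queue: string[][] = [[start]];
  const seen = new Set<string>([start]);

  while (queue.length > 0) {
    const path = queue.shift()!;        // next shortest partial ladder
    const last = path[path.length - 1];
    if (last === goal) return path;

    for (const next of dictionary) {
      if (!seen.has(next) && isOneAway(last, next)) {
        seen.add(next);
        queue.push([...path, next]);
      }
    }
  }
  return null; // no ladder connects the two words
};

// shortestLadder('pain', 'rail', dictionary) might return ['pain', 'rain', 'rail']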
Edit (really, 2 minutes later). I was looking for the Levenshtein Distance. I knew that!
Edit. Well, would you look at that: Levenshtein-Automata
Edit. And that. Levenshtein-Puzzle
Edit. The approach would be to find the n=1 distance by taking the input word, say word, and searching a dictionary for .ord, then w.rd, then wo.d, then wor., and concatenating the results. Simple. To find the n=2 distance, input each of those words into the same function, being careful to filter out searches that would search the same space. And so on. [Edit. No, that's dumb. To find n=2, just search the dictionary for ..rd, then .o.d, then .or., then w..d, and so on.]
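A minimal sketch of that wildcard approach. The function name is mine, it assumes the dictionary is already loaded as an array of words, and the actual implementation on Github may well differ:

// Find all dictionary words exactly one substitution away from `word`:
// replace each position with "." and grep the dictionary, as described above.
const oneAway = (word: string, dictionary: string[]): string[] => {
  const patterns = word
    .split('')
    .map((_c, i) => new RegExp(`^${word.slice(0, i)}.${word.slice(i + 1)}$`));

  return dictionary.filter(
    (candidate) =>
      candidate !== word &&
      patterns.some((p) => p.test(candidate))
  );
};

// e.g. oneAway('tale', dictionary) => ['sale', 'male', 'pale', 'tile', ...]
// n=2 would use two dots per pattern, and so on.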
Edit. Okay, done. Also on Github.
I have used it to find these gems:
- War and Pence
- A Portrait of the Autist as a Young Man
- Brave Neg World
- The Gripes of Wrath
- The Pun Also Rises
- The Sub Also Rises
- Tropic of Canker
- The Maltose Falcon
- The Catcher in the Eye
- Something Wacked This Way Comes
- Men and the Art of Motorcycle Maintenance
- The Call of the Mild
- Nuked Lunch
- Bridesdead Revisited
- Sometimes a Great Lotion
- American Goys
A fun little project.
Thursday, August 31, 2017
Idea: benchmark for vdom
What is the tradeoff between using a virtual document object model and straightforwardly updating the dom directly? Surely, an app that inserts a single div in response to user input wouldn't justify the use of a virtual dom. But what is that tipping point? What has to happen in the app in order for a virtual dom to become worth the engineering?
Perhaps a benchmark test could start with comparing two apps. One using a virtual dom, the other not, but otherwise the same. Compare metrics like time difference when inserting one element per cycle. Then ten. Then 100.
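As a crude starting point, the direct-DOM half of that comparison could be timed with something like this. The vdom half would render the same rows through whatever library is under test and use the same timer; note this captures script time only, not layout or paint:

// Crude timing of direct DOM insertion: append `count` divs and measure how
// long the script work takes.
const timeDirectInsert = (count: number): number => {
  const container = document.createElement('div');
  document.body.appendChild(container);

  const start = performance.now();
  for (let i = 0; i < count; i++) {
    const div = document.createElement('div');
    div.textContent = `row ${i}`;
    container.appendChild(div);
  }
  const elapsed = performance.now() - start;

  document.body.removeChild(container);
  return elapsed;
};

[1, 10, 100, 1000].forEach((n) => {
  console.log(`${n} inserts: ${timeDirectInsert(n).toFixed(2)} ms`);
});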
This might help:
https://localvoid.github.io/uibench/
Tuesday, August 8, 2017
BM API + dataviz
I'm having fun playing with the Burning Man API and d3. My noodling is here: http://rendall.tv/bm/v001/
I don't know if I will ever set it up to be an actual, full, proper app, but for now it's fun to see pie charts of the types of scheduled events, or bubble charts of theme camp surface area, or bar charts of the number of events each theme camp is hosting. I'm using Ramda.js to keep the data flow reasonable. As I develop it, I will likely use Rxjs to make interaction states easier to reason about.
One interesting challenge:
I'd like to get data with respect to hometowns of the various theme camps. Data for a theme camp comes in this kind of JSON object, for example:
{"uid": "a1X0V000002rka2UAA",
"year": 2017,
"name": "Houphoria",
"url": null,
"contact_email": null,
"hometown": "Houston",
"description": "The purpose of Houphoria is to provide multiple spaces for unique interactions to explore all of our senses. We are a collective of Houston based makers, visual artists, musicians, and DJs coming together and creating a environment for participants to come and interact with us and our creations. By day, we're a well shaded coffee lounge and bar, but at night is when our space really comes alive with vibrant lighting, sounds, and projections." }
As you can see, the hometown for the camp Houphoria is Houston. Most hometowns are pretty clear:
San Francisco
Los Angeles
Unknown
Reno
Oakland
Seattle
Portland
San Diego
However, others are not so clear:
- Some hometown fields contain several locales, and this concept is expressed in various ways:
- Vancouver/Louisville
- San Frandencouver
- Woodacre-Seattle
- SD-OC-LA-SF
- Portland & Bend
- San Fran and LA
- New York, Dubai, San Francisco, Hong Kong
- Woodstock NY to Arcata CA, Shanghai/Bahrain/Norfolk VA
- There are several ways to express the same city. Including misspellings, San Francisco is expressed variously as:
- SF
- San Francisco
- Bay Area
- SF Bay Area
- San Francisco bay area
- Bay Area, CA
- San Francisco and the surrounding Bay Area
- San Fra
- San Fran
- San Fransisco
- San Franscisco
- Some camps add extraneous information:
- Reno, Nevada (mostly), with a couple from Salt Lake City, Utah, and California
- Ibiza. This is were the founding members of the camp met, most people are from different countries but Ibiza is the root
- Nikola-Lenivets, GPS 54.75782,35.60123
e.g: ["SF", "Bay Area", "SF Bay Area", "San Francisco bay area", "Bay Area, CA", "San Francisco and the surrounding Bay Area", "San Fra", "San Fran", "San Fransisco", "San Franscisco"] => "San Francisco")
At this point, though, I feel like answering this challenge would stall my momentum on more interesting challenges.
So why not simply leave it? In other words, why not simply treat the camp with the hometown of Los Angeles, San Francisco, New York as in a distinct bucket from San Francisco, Los Angeles, New York?
Well, that's what I'm doing for now, but it feels inelegant. If the data is going to impart any insight into where camps come from, it will need to be done.
But, this project is about exploring the d3 framework in a fun way, not getting bogged down in implementation. So, moving on.
Friday, August 4, 2017
Two random ideas
- My life events and career as a json file. Then it can be the data for all sorts of presentations and filters, including a resume. (A rough sketch of what that shape might look like follows this list.)
- Burning Man events as a timeline map, mashing up these two:
- This d3 sketch: http://bl.ocks.org/cmdoptesc/fc0e318ce7992bed7ca8
- This BM API https://api.burningman.org/
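For the first idea, even a made-up shape as simple as this would already be filterable into a resume or a timeline. The fields are just guesses at what I'd want; nothing here is settled:

// A made-up shape for "my life and career as data"; any presentation
// (resume, timeline, map) is then just a filter + template over this.
interface LifeEvent {
  start: string;          // ISO date
  end?: string;           // open-ended if omitted
  kind: 'job' | 'education' | 'project' | 'personal';
  title: string;
  location?: string;
  tags: string[];
}

const events: LifeEvent[] = [
  { start: '2017-08-01', kind: 'project', title: 'Burning Man API dataviz', tags: ['d3', 'ramda'] },
  { start: '2017-09-08', kind: 'project', title: 'cdnler', tags: ['typescript', 'cdn'] },
];

// A "resume" is just the work-related slice:
const resume = events.filter((e) => e.kind === 'job' || e.kind === 'project');
console.log(resume);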
One technical approach to one bilingual website.
I am updating my website, and I want it to be bilingual, supporting Finnish.
The technical solution I'm thinking of employing is to serve the same page irrespective of language, but alter its content on the backend before serving it. The Finnish version will be hosted at fi.rendall.tv, which essentially just points to rendall.tv, so it's a simple matter to see whether the request URI is prefixed with fi and use that as the "use the Finnish version" flag. Otherwise, just serve whatever page is asked for by default.
As an overview: to serve the Finnish version, I will most often just take the English version and replace each instance of English content with a Finnish translation. On occasion it might be necessary to present an entirely different, uniquely formatted page for the Finnish version. And, of course, if there is no Finnish-language version, I'll fall back to English.
So, in some detail: every html request to rendall.tv will be received by a controller that first checks whether the request URI begins with 'fi'. If not, it'll just pass the request through. If there is a fi in the URI, the controller will check whether there is a Finnish-language version of that page: the same filename, but with .fi appended before the extension (e.g. a fi.rendall.tv/index.html request will look for index.fi.html), and if so, will serve that directly (as index.html). That covers the comparatively rare case of complete Finnish re-formats.
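To illustrate that decision sequence (and only to illustrate; the controller on the actual site might be written in anything), an Express-style sketch with made-up paths:

import express = require('express');
import fs = require('fs');
import path = require('path');

const app = express();
const root = path.join(__dirname, 'public'); // hypothetical content directory

app.use((req, res, next) => {
  if (!req.path.endsWith('.html')) return next();

  // The "fi." subdomain is the "use the Finnish version" flag.
  if (!req.hostname.startsWith('fi.')) return next(); // pass English through

  // index.html => index.fi.html: serve a full Finnish re-format if one exists.
  const finnishPath = path.join(root, req.path.replace(/\.html$/, '.fi.html'));
  if (fs.existsSync(finnishPath)) return res.sendFile(finnishPath);

  next(); // no full Finnish page; fall through to the translation-mapping step described next
});

app.use(express.static(root)); // default: serve whatever page was asked for
app.listen(8080);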
If there is not a full Finnish version of that page, the controller will then look for a similarly named static file (e.g. index.fi.txt) that will map a bunch of key-names to translation values.
eg: "howdy" : "Ohoi!"
This static file will be used to alter the requested file and present a Finnish version. HTML tags with the attribute data-translation="<key-name>" get their inner HTML replaced by <translation-value> in accordance with the mapping.
For instance,
<span class="display" data-translation="howdy">Howdy!</span>
magically transforms into
<span class="display" data-translation="howdy">Ohoi!</span>
It's important that there be an option to alter the actual inner HTML, not only the text.
The sentence "Welcome to the personal and professional website of Rendall Koski" must have for stylistic reasons a nbsp; between my first and last name. The font I'm using for the welcome banner, Reenie-Beanie, does not have a hard space. That is to say, nbsp; renders with no space at all. Reenie-Beanie renders Rendallnbsp;Koski as RendallKoski, not Rendall Koski. The fix is to wrap the nbsp; in a <span> with a .nbsp-fix class that has a different font-family (that does render nbsp;) like so:
Rendall<span class="nbsp-fix">nbps;</span>Koski
Now my name renders properly, and won't break apart. Problem solved. For English.
But! The translation for "Welcome to the personal and professional website of Rendall Koski" in Finnish is "Tervetuloa Rendall Koskin persoonallinen ja ammatillinen verkkosivuun"*. If my scheme only allows for straight text translations and no HTML, swapping this in would blow away my span.nbsp-fix above, and then I'm back to having either no &nbsp; or stupid-looking RendallKoskin.
Therefore the translation scheme should also cover HTML, so that this mapping
"welcome": 'Tervetuloa Rendall<span class="nbsp-fix"> </span>Koskin persoonallinen ja ammatillinen verkkosivuun!'
transforms the line: Welcome to the personal and professional website of Rendall<span class="nbsp-fix">&nbsp;</span>Koski
properly into this line:
Tervetuloa Rendall<span class="nbsp-fix">&nbsp;</span>Koskin persoonallinen ja ammatillinen verkkosivuun!
* I have not yet verified these translations with a native Finnish speaker.
Thursday, August 3, 2017
The problems and future of social media
Facebook brings so many wonderful things to the world: connection to long-unseen acquaintances; community organizing; updates from friends and family. Facebook is catholic, so to speak.
The cost, of course, is placing our thoughts, ideas, memories, pictures, opinions and habits into a closed, for-profit ecosystem with dubious ethics and the tendency to meddle with the presentation of that content (and the content itself) like a scolding, self-serving parent. Their algorithm for content curation is opaque, and designed, at best, to maximize their own advertising revenue.
Attempts to remake Facebook as another website will fail. Logging into a website held on private servers, in order to see or share or participate, exactly replicates the issues that Facebook has (e.g. 'Ello), no matter how ethically its owners promise to behave. Control and presentation of your content, of your identity, of your opinion, is still in the hands of other people.
The only way to solve the problems of these closed gardens like Facebook is to give every individual total control over their own content, and over the content feed they see. I'm talking about a paradigm shift in social media. I'm not talking website, I'm talking protocol. The web, exactly as we have it now, with our websites and blogs and such; but we subscribe to them, and receive them when and how we want them.
So, my pitch is: existing technologies such as RSS, open IDs and authentication, blogging software and so on, glued together with the right protocol (TBD), would allow individuals to have more control over their own data and identity, while still allowing all of the good that Facebook and other social media actually bring to the world. Individuals host their own equivalent of the "Facebook wall" (or hire a host), and can subscribe to other "walls". Posting on your "wall" can have different levels of security - available to a trusted audience or to the public, rather like Facebook now. Only, you own it. A free, open-source protocol that emphasizes interoperability and user control would allow devs to create any kind of client that suits them. Competing clients, and competing hosts, with competing revenue models, will lead to all kinds of UX solutions, not just the one handed to you by the hegemony. Those who hate scrolling, for instance, will create and gravitate toward clients that solve that.
This is obviously only a roughly sketched idea. I actually see this future - of a distributed social media - as rather inevitable, whether or not I work on it. The more that social media becomes essential to our lives, the less willing we will be to settle for our social networks and identities to be controlled by others.
Some related tech:
FOAF
Scuttlebutt
Matrix