I spend way too much time on Hacker News. It's a fun place, and a good way to keep up to date on all the new tech that us developer folk seem to need to know about. But it also leaves a fella feeling like he really needs to keep up with all this stuff. I mean, if you don't have a side project using the latest client-side framework, well, good luck ever finding a job again in this industry.
Thinking about it, I find that I straddle the line on this. As a long-time contractor, I try to stay up to date on the New Shiny and will happily run with whatever flavor of the month language, framework, and programming paradigm that a given gig wants. Yeah, sure, Node.js with tons of functional stuff mixed in pulling from a NoSQL store and React on the front end. I'm your guy. We're gonna change the world!
But for my own stuff, there's no way I'd use any of that crap. Good old C#, SQL Server and a proper boring stack and tool set that I know won't just up and fall over on a Saturday morning and leave me debugging NPM dependencies all weekend instead of bouldering in the forest with the kids. This stuff is my proper income stream, and the most important thing is that it works. If that means I have to write a "for" loop and declare variables and risk 19-year-old kids snooting down at my code, so be it.
I can't tell you how nice it is to have software in production on a boring stack. It gives you freedom to do other things.
I can (and often do) go entire months without touching the codebase of my main rent-paying products. It means I can, among other things, pick up a full-time development gig to sock away some extra runway, take off and go backpacking around the world, or better still, build yet another rent-paying product without having to spend a significant amount of time keeping the old stuff alive.
It seems like on a lot of stacks, keeping the server alive, patched and serving webpages is a part-time job in itself. In my world, that's Windows Update's job. Big New Releases come and go, but they're all 100% backwards compatible, so when you get around to upgrading it's just a few minutes of point and clicking with nothing broken.
I see it as Compound Interest, but applied to productivity. The less effort you need to spend on maintenance, the more pace you can keep going forward.
But yeah, the key is to never get so far down into that comfy hole that you can't hop back into the present day when it's time to talk shop with the cool kids. Shine on, flavor of the week!
by Jason Kester
I run a little single player software empire, and right now the product that's paying my bills is S3stat. It's a SaaS product that produces human-readable reports from the basic logfiles that Amazon generates for its Cloudfront and S3 services. It's kinda like Google Analytics, but for your Cloud stuff. You should totally sign up.
But anyway, the way it works is that you sign up for a Trial account with us and use a little installable tool we provide that lists out your S3 Buckets and Cloudfront Endpoints and walks you through setting up logging for the ones you’re interested in. Then it tells us where to find those logs so we can produce reports. Or at least that’s the way it originally worked.
Over the last few years though, more and more new users started coming on board with logging already running on their stuff. The tool will detect this, of course, and not screw everything up by changing your settings. It’ll just note the log location and send it along as per usual.
Every once in a while I'd get an email from somebody with a few months of old logfiles sitting around, asking if we (because we're a company and therefore a "we", even though it's just me) could generate reports for them. Sure, no worries, we'd say. And I'd kick off a job to run those old reports. Happy new customer.
But I’m a nerd. One of those lazy ones who likes to automate things, and even though this only took a few minutes out of my day, I’d much prefer to keep those minutes for things like blowing off work for the day to go rock climbing because it’s sunny and I can do that because I run my own company. The less time I have to spend dealing with these customers, the better. So I built a little sniffer that would detect pre-existing logfiles and gave the users an option of hitting the “go” button on the report runner themselves.
But I’m also a capitalist. So I didn’t push that out just yet.
Instead, I changed it to automatically run just one month's worth of logs, so that new users could get a nice big taste of what they could expect from the service. Then I offered the option to purchase additional months of service so that they could process any older logs. And I priced those additional months at 100% of the cost of normal S3stat service. All you're doing is moving back the start date for your subscription.
And just as it has surprised me every other time I've asked my customers for more money for things, people actually started buying those extra months of service. Nice!
But it gets better.
Over time, I’ve noticed that the average purchase for extra service seems to be climbing. People are showing up with more and more logs sitting around that they want to have processed. And why has this started happening? Because of this:
That's what you see when you go to create a new S3 Bucket in the AWS Console these days. Notice how the Next Step after naming the thing is to Set Up Logging. Gee, that sounds important, if Amazon is telling me to do it. And sure enough most people seem to do so when creating new Buckets. Cloudfront does something similar when creating a new Distribution, where it throws Logging right in your face before you can move forward.
So I'm sure you've made the connection back to my service, but I'll go ahead and spell it out. Amazon's new default way of setting up S3 and Cloudfront prompts you to start logging immediately. That means that when and if you do eventually decide to try S3stat, you'll show up with a bucket full o' pre-existing logfiles for us to process. And that means that you are pretty likely to decide to move your start date with us all the way back to the day you set up your stuff in the first place, giving us an extra several months' worth of lifetime value from you right off the bat.
Cool, eh?
Big companies have a way of accidentally stepping on little companies as they move around sometimes, crushing a thriving business with a minor change to their Terms of Service or a decision to stop publishing a certain data feed. But every once in a while, one of those seemingly random little steps manages to squish up the dirt under one of us little guys, leaving us better off than we were before.
It’s nice to be on the receiving end of that for a change.
by Jason Kester
I kinda stumbled into this idea a few months back. I had added a feature on S3stat where Users can create extra logins for other members of their team. So instead of printing out reports and sending them to their boss, they can simply add Mr. Boss Guy to their team and he can check his own reports. Less work for the account user.
But it turns out that it was a better idea than I'd thought. Because if Joe User let his trial lapse, he'd get an email the next week from Mr. Boss Guy asking where the hell this week's reports were and why couldn't he log in to S3stat anymore? Joe would explain that gee, that thing costs like ten dollars a month and he doesn't have a budget for software. And Mr. B would yell at him for 24 seconds, then explain how he had just this minute wasted $10 of his own time yelling at somebody, because $10 is not a lot of money for a business, and go subscribe right now. Here's the company card.
Well, that was cool. I wondered how many times that had happened.
So I wrote a big ugly SQL query to count how many Users had signed up for trials since that feature went live, how many of them converted to paid, and to see whether Trial Users who added at least one team member converted at a higher rate than users who didn't.
And they did. By a huge margin.
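For the curious, here's the shape of that calculation sketched in C#. The types and names are invented for illustration; the real thing is, as advertised, one big ugly SQL query against S3stat's own tables:

using System;
using System.Collections.Generic;
using System.Linq;

// Invented types for illustration; S3stat's real schema surely differs.
class TrialUser
{
    public DateTime TrialStart;
    public bool ConvertedToPaid;
    public int TeamMemberCount;
}

static class ConversionReport
{
    // Compare conversion rates for trials that did and didn't add a team member.
    public static void Run(IEnumerable<TrialUser> users, DateTime featureLaunch)
    {
        var trials = users.Where(u => u.TrialStart >= featureLaunch).ToList();

        Console.WriteLine("Added a team member: {0:P1}",
            Rate(trials.Where(u => u.TeamMemberCount > 0)));
        Console.WriteLine("No team members:     {0:P1}",
            Rate(trials.Where(u => u.TeamMemberCount == 0)));
    }

    static double Rate(IEnumerable<TrialUser> group)
    {
        var list = group.ToList();
        return list.Count == 0 ? 0 : (double)list.Count(u => u.ConvertedToPaid) / list.Count;
    }
}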
Well wow. I wonder what other features I have (or could build) that can make that sort of difference. And I wonder what sort of tools I could build to get a better handle on this sort of thing. Because this is not the sort of thing I want to have to find out by accident ever again.
Here's an equation:
(let ((g (* 2 (or (gethash word good) 0)))
(b (or (gethash word bad) 0)))
(unless (< (+ g b) 5)
(max .01
(min .99 (float (/ (min 1 (/ b nbad))
(+ (min 1 (/ g ngood))
(min 1 (/ b nbad)))))))))
If it looks familiar, that's not a surprise since I lifted it verbatim from an article Paul Graham wrote way back in 2003 called A Plan For Spam. In it, he basically invented the Bayesian Spam Filter, which is the reason that email is still usable today even though roughly 99% of the things headed toward your Gmail inbox are in fact spam.
At its heart, the algorithm is just a way of classifying piles of information into Good and Bad groups. For email, you look at the words in a message and determine whether an email with those words is likely to be Spam (which is Bad) or not (which is Good). There's nothing about the algorithm specific to the problem of Email. It just happens to be good at classifying stuff, and email benefits greatly from being classified.
But Users (of your SaaS product, remember) can be classified too. Over the course of a trial, they do lots of things and give off all sorts of signals that you can collect and analyze. You can feed those signals into a Bayesian Classifier to train it about what Good (subscription activating) users tend to do, and what Bad (trial expiring, canceling) users tend to do. And you can feed the signals from a new Trial User into that trained Classifier to produce a guess about what he might do in the future.
So in the case above, where we're looking at Users who have invited Team Members vs. those who haven't, we can now just define an "AddedTeam" Signal that we track for that user and the Classifier will take care of determining whether that is in fact statistically more likely to indicate that our User will convert to a paid subscriber.
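To make that concrete, here's a minimal C# sketch of the classifier half, applying Graham's formula from the Lisp above to per-user signals instead of words. All the names are mine, invented for illustration; this isn't Unwaffle's actual code:

using System;
using System.Collections.Generic;
using System.Linq;

// Minimal Bayesian signal classifier, using the "A Plan for Spam" formula.
// Good = trials that converted to paid; Bad = trials that expired or canceled.
class SignalClassifier
{
    readonly Dictionary<string, int> good = new Dictionary<string, int>();
    readonly Dictionary<string, int> bad = new Dictionary<string, int>();
    int nGood, nBad;

    public void Train(IEnumerable<string> signals, bool converted)
    {
        var counts = converted ? good : bad;
        foreach (var s in signals.Distinct())
        {
            int n;
            counts.TryGetValue(s, out n);
            counts[s] = n + 1;
        }
        if (converted) nGood++; else nBad++;
    }

    // Probability that a user emitting this signal will fail to convert,
    // clamped to [.01, .99] just like the Lisp version. Null means we've
    // seen too few examples to trust the number.
    public double? BadProbability(string signal)
    {
        if (nGood == 0 || nBad == 0) return null;

        int gCount, bCount;
        good.TryGetValue(signal, out gCount);
        bad.TryGetValue(signal, out bCount);

        double g = 2.0 * gCount;   // double the good count, as Graham does
        double b = bCount;
        if (g + b < 5) return null;

        double badRatio = Math.Min(1.0, b / nBad);
        double goodRatio = Math.Min(1.0, g / nGood);
        return Math.Max(.01, Math.Min(.99, badRatio / (goodRatio + badRatio)));
    }
}

Feed it one Train() call per finished trial, with whatever signals that user emitted ("AddedTeam" among them), and the signals that matter drift toward .01 or .99 on their own.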
And that's good.
Like measurable-in-dollars good.

I've been experimenting with other things you can track, such as aggregates like "NoLoginLast7Days" and rollups like "10LoginsLast30Days" that might give the classifier more to grab on to. And, of course, just watching the data as it rolls past is enlightening too. You learn a lot when you start collecting usage data (in human readable form), like just how many people are forgetting their password every time they log in, and that there are customers who do in fact check in several times a day as part of their workflow. Even without the crazy Bayesian math, you can get a good feel for which people are planning to buy the thing when their trial runs out.
I'm going to write more on this as I build it out, but I guess this is a good time for a heads up that I'm hoping to build this into a product at some point. It's live, in its infancy, at unwaffle.com. If this stuff sounds like something you'd use for your own SaaS, hit me up with an email or go sign up for an invite.
by Jason Kester
There's something rather romantic about the notion of living on a tropical beach and working away on your laptop. And you know what? It's one of those few instances where the real version is actually just as awesome as the romantic ideal.
In fact, you need to quit thinking about it as a romantic notion. It's something you can do. And it's high time you did it.
Here's how.
Have you noticed how much easier it is to run a software company than pretty much any other type of business? You don’t need office space, a retail storefront, telephones, employees, or even an address. Other businesses need all those things.
We get to run the whole thing on a little server stuffed into a data center someplace. Probably someplace we've never even visited, for all we know, since the only indication we have is that the machine has "us-east" in its name. Is there any reason that we, too, need to be in that half of the US?
In practice, we can just as easily be in “Thailand-south” and nobody would ever know the difference.
Would anybody know the difference?
Real Companies have phone lines with people who answer them. We have Google Voice patched into Twilio patched into some funky 3rd world carrier who, incidentally, does a much better job of ensuring that your phone will ring on the beach than AT&T ever did drilling through the walls of my apartment in Portland.
Real Companies have mailing addresses and Business Bank Accounts. So do we. We set that bank account up before we left, using the business address, which by coincidence happens to be the same as that of our parents' house or, if we were getting all fancy, an office-in-a-box in Delaware or Nevada.
Am I allowed to work there?
No, not really.
But they’re not checking. Places like Southeast Asia are chock full of expats living there, doing silly things like “visa runs” to the next country and back every couple months to remain a tourist for years on end.
Even in Europe, where you wouldn't be able to get a self-employment working visa (if such a thing existed) with less than six months of effort and a good lawyer, you'll find that they're really a lot more interested in keeping the various people coming in from the South from doing so than they are in messing with you. Given the level of effort required to find somebody at the Santander ferry port to even stamp your passport, it's unlikely that anybody is conducting a multi-month surveillance of your AirBnB "office" up in the hills above the Côte d'Azur.
What about taxes?
Here, you’re in luck. The IRS, being awesome, actually encourages you to piss off to parts unknown with its Foreign Earned Income Exclusion. Basically, if you can prove you’re a Bona Fide Resident of another Country (which you can’t) or that you’ve been Physically Present outside the US for at least 330 days during the last year (which you can), then they’ll let you deduct nearly $100,000 from your income before taxes.
Naturally, it’s never as good as it seems, but even after they’ve clawed back your self-employment taxes and a few other things, you still get to write off a nice chunk of change from your taxes. Certainly enough to pay for your room & board in a place like Southeast Asia.
Can you really work from the beach?
Of course. It's the Future. The Internet is everywhere. Step One is to find the most pleasant, most remote, cheapest, most Swedish-girl-having beach available (preferably with good rock climbing and/or surfing). Step Two is to find a thatch-roofed bar on said beach with cheap beer and a good view of the sunset. And we're done. They'll have wifi.
Now we start a radial search outward until we find a pleasant little bungalow with monthly rates. $400/month will buy you a lot of developing-world-luxury in this day and age. That’s even less than it would cost for a 1BR apartment in San Francisco, I’m told, and it comes with Utilities paid.
How long until I run out of money?
You won't.

(An alternate plan, where you get to keep your day job: you don't even need to go out on a limb to pull this off these days. Plenty of companies offer remote work as an option, and few will actually go so far as to specify just how remote you can get. Nicaragua is on Dallas time, and yes, they have internet there. They also have nights and weekends, so if you think you can bootstrap your idea up here, there's no reason to expect you won't be able to down there.)
The unexpected reality is that living out of a backpack in most of the sunnier bits of the world is cheaper by far than living in a city in the US. Even on a couch. With roommates.
You’ll spend your time building a business. A real one that charges people money in exchange for stuff and hits profitability fast. You’ll fight for a while and start bringing in a few hundred bucks a month from paying customers. Beach life will be paid for at this point and now the goal shifts to building the revenues and getting to “I can live on this money”, then “day job replacing money” and hopefully one day “I can retire off this money”.
And you can do it all from that beach.
Sorted.
I’ve actually done all of this. Chances are I’m not the first person you’ve met who has. It’s not in any way difficult except in that it represents a Big Change.
Figure out how to convince yourself that it’s going to work. Then save up $10k in case it doesn’t. Then book that flight.
Can’t ask fairer than that. Good luck!
by Jason Kester
You should probably be working remotely.
Seriously. Take a quick look around you right now. Do you appear to be sitting in a felt cube? Does the word "software" appear in your job description? Cool. You're in a position to make your life a whole lot better.
I've been working 100% remotely for the better part of ten years now.
The short answer for why? Because I can.
That's really the single greatest feature of being a developer today. You can do your thing from pretty much anywhere in the world with no reduction in throughput.
I can (and did) set up shop for the winter on some remote Central American surf break. I can (and did) move my main residence to a small village in the French countryside, where the quality of life is good and there's enough bouldering to last me a lifetime of afternoons off. I can (and did) simply pack my whole development world onto a 12" Thinkpad and head off on the road for an entire year.
And all those places have wifi. And I can work there. So I do.
So even if I found a company that did happen to have an office right next to that perfect left reef pass off the coast of Sumatra, I probably still wouldn't want to commit myself to working there full time. I already have an office there. As well as everywhere else I'd like to be.
It didn't use to be like this. And it still isn't for most professions. But it absolutely is for software. As a developer, I think you'd be crazy to pass it up.
So yeah, that's why.
And here's the thing, in case you missed it above.
You can do this too. The industry is waking up to the fact that remote working works. There are tons of companies hiring remote workers right now. Enough so that it doesn't even make sense for me to list any of them here.
So yeah, get on it today. Find a way out of that cubicle, and at the very least onto your kitchen table. You can sort out the whole laptop & beach thing later. But the first step is to acknowledge that we're living in the future, and start doing so.
Good luck!
by Jason Kester
Common knowledge: You need to do A/B testing on your site.
Why? Because it will make you more money.
Cool, but why? Because if your business sells things on the web or otherwise makes you money as a result of people doing stuff on your website, you want to maximize the percentage of people doing stuff. That's where A/B testing comes in.
What is it?
Basically, if you test one version of your website against another version, you can measure which one better compels users to do something.
So, for example, you might try testing your normal "Buy Now" button against a double-sized, bright red shiny button. You'd show one version to half your visitors, the other version to the other half. Pay attention to who saw which version, and whether they actually bought your thing.
After a week or so collecting data, you can see that, for example, 4.8% of visitors seeing your old button clicked it, whereas 6.6% of those who saw the big red one clicked it. That's valuable. As in, measurable in dollars valuable, so you need to be doing it.
How to do it
To do A/B testing right, you need to run it from the web server. There are tools out there that give you javascript code that Marketing will try to convince you to paste into your site, but don't do it. It would take an entire article just to explain all the ways that's a bad idea.
It's not too hard to code something up yourself, but there are some good libraries out there that you can simply drop in. I'm writing this post to tell you about the one I wrote for ASP.NET and ASP.NET MVC:
FairlyCertain - A/B Split Testing for ASP.NET
I've been using this for the past 6 months for S3stat and FairTutor, with some pretty impressive results. Now that it's good and stable, I finally motivated myself to package it up and release it as Open Source.
It's essentially a simplified version of Rails' A/Bingo, with one major departure: participation data is stored in the user's browser rather than a local database. That helps it scale out better and means that you can pretty much just drop the code into your project and have it start working without having to configure anything.
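To give you a feel for the cookie-based approach, here's a rough sketch of the idea. It's illustrative code of my own, not FairlyCertain's actual internals:

using System;
using System.Web;

// Sketch: the variant assignment lives in the visitor's browser, so the
// server doesn't need a database row per participant.
static class AbSketch
{
    static readonly Random Rng = new Random();

    public static string GetVariant(HttpContext ctx, string testName, string[] variants)
    {
        var cookieName = "ab_" + testName;
        var existing = ctx.Request.Cookies[cookieName];
        if (existing != null)
            return existing.Value;                  // returning visitor keeps their variant

        var variant = variants[Rng.Next(variants.Length)];
        ctx.Response.Cookies.Add(new HttpCookie(cookieName, variant)
        {
            Expires = DateTime.UtcNow.AddDays(30)   // remember the assignment for a month
        });
        return variant;
    }
}

// Usage on a page: show the big red button to half your visitors.
// var v = AbSketch.GetVariant(HttpContext.Current, "buy-button", new[] { "plain", "big-red" });

The nice property is that the assignment rides along with the visitor, so any number of web servers can serve the same test without sharing state.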
Check it out and let me know if you find it useful.
FairTutor is our latest project here at Expat. It's a website that connects Spanish teachers in South America with students in the US and lets them hold live Spanish classes online. We'll be starting Beta classes soon, so if you want to score some free Spanish lessons, you might want to go sign up for the waiting list!
by Jason Kester
We're happy to announce that S3stat now offers support for CloudFront Streaming distributions. We've offered S3 and CloudFront Analytics for quite a while, so it was an easy decision to extend the service to include Streaming.
Basically, we'll handle all the setup and configuration needed to get Logging enabled on your CloudFront distribution, and each night we'll download and process those logfiles, and deliver reports back to your S3 bucket.
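Behind the scenes, the nightly half of that is a simple loop. Here's a hypothetical sketch using the AWS SDK for .NET; the bucket layout and the processing step are stand-ins, not S3stat's actual code:

using System;
using Amazon.S3;
using Amazon.S3.Model;

// Hypothetical nightly job: list the day's log objects and hand each one
// off for parsing and report generation.
class LogSweep
{
    public static void Run(IAmazonS3 s3, string logBucket, string logPrefix)
    {
        var request = new ListObjectsRequest { BucketName = logBucket, Prefix = logPrefix };
        ListObjectsResponse response;
        do
        {
            response = s3.ListObjects(request);
            foreach (var entry in response.S3Objects)
                Console.WriteLine("would download and parse {0}", entry.Key);

            request.Marker = response.NextMarker;   // page through large log sets
        } while (response.IsTruncated);
    }
}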
Web Stats for Cloudfront & Amazon S3
This feature has been out of Beta for about a week, so go ahead and give it a try when you get a chance. I'd love to hear your feedback.
by Jason Kester
I remember the day I got my first Spam post at Blogabond, back in 2005. It was actually kind of flattering, since the site had only been live for a few months. I deleted it by hand and moved on.
Things have progressed substantially since then. Automated Spam Bots gave way to armies of cheap workers posting by hand, and now we've reached a point where roughly 90% of new blog entries on the site are attempted spam. The sheer volume of posts coming in is enough to sneak some of them past the Bayesian Filtering we have in place, so we're lucky to have some extra measures in place to make sure that the general public never sees any spam on Blogabond.
I've learned a lot about Blog Spam over the years, so I thought I'd share some advice for anybody building their own user-generated-content site. Presuming, of course, that you don't want to be overrun with spam.
Collect Everything
Never throw spam away. It's valuable. You need tons of spam to train your Bayesian filters, and you need to use real spam from your own site to get the filtering results you want. Our filters, for example, can differentiate between a post written by a backpacker traveling through Guatemala and a resort offering package vacations there.
Mark posts as spam and ensure that nobody can see them, but keep them around. They're handy!
Classify your Users
At Blogabond, we have the concept of a "Trusted User", whose posts we're comfortable showing on our front page, in RSS feeds, sitemaps, location searches, etc. The only way to become Trusted is to have a moderator flip you there by hand after reading enough of your posts. Everybody else is either a Known Spammer or simply Unknown.
These classifications are the main reason that the average person will never see any spam on Blogabond. All publicly browsable content is from Trusted Users, so the only way to see something from an Unknown user is to go to the URL directly. That means that you can start a new blog today and send out a link that people can use to see what you've written, but until you've convinced us you're trustworthy we're not going to let people off the street stumble across your stuff.
Never Give Feedback
The last thing you want to tell a Spammer is that his post was rejected as spam. Never tell him that his account has been disabled. Let him figure these things out on his own, hopefully after a lot of wasted time and effort.
Pages with spam content return a 404 (Not Found) to anybody accessing them from outside the author's IP block. That way, the author can (mistakenly) verify that his post is live, while the rest of the world and Google never get to see it.
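In ASP.NET terms, that trick looks something like this sketch (made-up names, not Blogabond's actual code):

using System.Web;

// Sketch: a spam-flagged post 404s for everyone except the author's own
// IP block, so the spammer believes his post is live.
static class SpamCloak
{
    public static bool ShouldHide(HttpRequest request, bool postIsSpam, string authorIpBlock)
    {
        if (!postIsSpam) return false;

        // Crude /24 check: authorIpBlock holds the first three octets, e.g. "10.1.2."
        var visitorIp = request.UserHostAddress ?? "";
        return !visitorIp.StartsWith(authorIpBlock);
    }
}

// In the page handler:
// if (SpamCloak.ShouldHide(Request, post.IsSpam, post.AuthorIpBlock))
// {
//     Response.StatusCode = 404;   // pretend the page doesn't exist
//     Response.End();
// }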
Never Show Untrusted Content to Google
The whole point of blog spam is SEO. Once Google gets ahold of a post, the game is over and the spammer has won. The worst thing you can do is blindly trust your spam filters to keep spam off your site and out of Google's index.
Assuming you're categorizing your users, this is simple. If it's from a Trusted User, it goes to places that Google can see it. If not, it doesn't. Sorted.
Maximize Collateral Damage
Stack the deck so that every action a Spammer takes increases the odds that he'll undo all his previous work.
When we flag something as spam, we also go back and flag everything in the past that came from that User and from his IP Address Block (as well as poisoning that IP Block and User going forward). So while he may get lucky and sneak a post through the filter on his first try, chances are he'll end up retroactively flagging that post as spam if he presses his luck.
We can actually watch as new messages drop onto the "Maybe Ham" pile, then mysteriously disappear a few minutes later. In essence, the spammer is cleaning up his own mess.
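Mechanically, it's a simple cascade. Something like this sketch, with names invented for illustration:

using System.Collections.Generic;
using System.Linq;

// Sketch of "maximize collateral damage": flagging one post poisons its
// author and IP block, both backward and forward in time.
class SpamModerator
{
    readonly HashSet<int> bannedUsers = new HashSet<int>();
    readonly HashSet<string> bannedIpBlocks = new HashSet<string>();

    public void FlagAsSpam(Post offender, IEnumerable<Post> allPosts)
    {
        bannedUsers.Add(offender.UserId);        // future posts from him auto-flag
        bannedIpBlocks.Add(offender.IpBlock);

        // Retroactively flag everything from the same user or IP block.
        foreach (var p in allPosts.Where(p =>
                 p.UserId == offender.UserId || p.IpBlock == offender.IpBlock))
            p.IsSpam = true;
    }

    public bool IsPoisoned(Post p)
    {
        return bannedUsers.Contains(p.UserId) || bannedIpBlocks.Contains(p.IpBlock);
    }
}

class Post
{
    public int UserId;
    public string IpBlock;   // e.g. the first three octets of the poster's IP
    public bool IsSpam;
}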
Automate Everything
You're going to get a lot of spam, so if you want to stay happy, you need tools that make it really easy to moderate. Our Spam Dashboard has a view showing snippets from every recent post that lets us flag an item with a single click (in a speedy, AJAX fashion). I'll spend maybe a minute a day running down that list turning Maybes into Spam, and occasionally marking a new user as Trusted.
We also have a pretty view of everything that's been marked as spam recently, along with reasons why and daily stats to see how well we're doing:
That's a screenshot from our Spam Dashboard this morning. As you can see, we're doing pretty well.
GREEN items are ones recently caught by the filter, RED items are attempts by a Known Spammer to post something, and items that have been retroactively flagged (from the spammer pushing his luck too far) are shown in BLUE. PURPLE items (none shown) are ones that we had to flag by hand because they made it past the filter.
In this shot, you can see a busy spammer creating new accounts, posting enough blog entries to trip the filter and undo all his efforts, then creating a new account and trying again.
Filter Ruthlessly
There are two categories of people using your site: Real Users and Spammers. When you first start out, you tend to see it less as two distinct groups and more as a broad spectrum with some people falling in between. The longer you run a site, the more you come to realize that no, there are no Real Users with "good intentions" who are mistakenly posting commercial links on your site. Those people are spammers.
So don't hesitate to flag anything that looks even a little bit fishy. Woman talking about her fabulous Caribbean Cruise out of the blue? Spam. Random person posting poetry in China? Spam. Guy from India who really wants to tell you about his hometown? Spam.
And how do you know you were right? Because you will never hear complaints from any of those people. We've labeled thousands and thousands of "bloggers" as Spammers over the years, and so far I've heard back from exactly one of them. Spammers know that what they're doing is Bad Behavior. When you shut down their account, they'll know why.
Make the Spammers Feel Successful
Spammers will put in a surprising amount of effort to get their posts past your spam filter. The harder you fight back, the harder they'll try. Once they've found something that works, however, they'll sit back and watch the posts flow. That's the place you want them, happily sending post after post into your Spam corpus and training your Bayesian filters.
A happy spammer is a spammer who's not going to spend any more time trying to work your system. A happy spammer is reporting success to his boss and costing the bad guys money. A happy spammer is constantly teaching your filter about new trends in the spam world so that it can do its job better.
You want to cultivate a community of happy spammers on your site.
by Jason Kester
I wrote an article last week describing ASP.NET's Internationalization (i18n) scheme in less than favorable terms, and it occurs to me that I should probably offer up a proper justification if I'm going to start throwing terms like 'Hopelessly Broken' around.
As several members of the ASP.NET community so eloquently pointed out in response to that article, ASP.NET does in fact offer a way to translate web sites from one language to another, and it does indeed work perfectly fine, thank you very much. That fact, which I omitted to mention last week, is not in dispute, and I apologize for implying otherwise.
To clarify, I don't mean to say that ASP.NET i18n is Hopelessly Broken to the point where it's not possible to do it, but rather that ASP.NET handles i18n in a fashion that is demonstrably worse than the accepted industry standard way of doing things which, incidentally, pre-dates ASP.NET.
Here's why.
First, let me give a quick rundown on the industry standard way of localizing websites: gettext. It's a set of tools from the GNU folks that can be used to translate text in computer programs. The ever-humble GNU crowd have a lot of documentation you can read about these tools explaining why they're so well suited for i18n and how they're a milestone in the history of computer science and incidentally how much smarter the GNU folks are than, say, you. And why you should be using emacs.
But anyway, to demonstrate why the gettext way of doing things makes so much more sense than the Microsoft way, let me run down a short list of the things you need to do to translate a website. For each task, I'll give an indication of how ASP.NET would have you do it, along with how you'd do it using hacky fixes I've put in place for the FairlyLocal library I discussed at length last week. Also, if there's a difference, I'll talk briefly about how "Everybody Else" (meaning gettext, which is in fact used by Everybody Else in the world to localize text) does it.
Identifying strings that should be marked for translation
ASP.NET: Find them by hand
FairlyLocal: Find them by hand
Everybody Else: Find them by hand, (unless you're using a language that supports the emacs gettext commands for finding text and wrapping them automatically)
Marking text for translation in code
ASP.NET: Ensure that they're wrapped in some form of runat="server" control
FairlyLocal: Wrap with _()
Everybody Else: Wrap with _()
ASP.NET actually does offer one advantage here, in that many of the text messages in need of translation will already be surrounded by a runat="server" control of some description. Unfortunately, that advantage is offset by the sheer amount of typing (or copy/pasting or Regex Replacing) involved in surrounding all the static text in your application with "<asp:literal runat="server"></asp:literal>", and by the computational overhead involved in instantiating Control objects for every one of those text fragments.
Everybody Else gets to suffer through the steady-state habit of surrounding all their text with _(""), or with a long copy/paste or Regex Replace session similar to the ASP.NET experience. It's still not all that much fun, but at least it's less typing.
Compiling a list of text fragments for use in translation
ASP.NET: Pull up each file in Design View, right click and select Create Local Resources
FairlyLocal: Build the project (thus running xgettext automatically)
Everybody Else: run xgettext
ASP.NET uses a proprietary XML file format called .resx, which is incomprehensible to humans in its raw form but has an editor in Visual Studio.NET. Everybody Else uses .po files, a plain text format simple enough to be read and edited by non-technical translators, with a variety of good standalone editors available as well.
Updating that list of text fragments as code changes
ASP.NET: Pull up each file in Design View (again), right click and select Create Local Resources (again)
FairlyLocal: Build the project (thus running xgettext automatically (again))
Everybody Else: run xgettext again
Specifying languages for translation:
ASP.NET: Copy the .resx file for each page on your site to a language-specific version, such as .es-ES.resx.
FairlyLocal and Everybody Else: create a language-specific folder under /locale and copy a single .po file there.
Surely there must be a tool to copy and rename the hundreds of locale-specific .resx files that ASP.NET needs for every single language, but I haven't found it yet. Please ASP.NET camp, point me in the right direction here so I don't need to go off on a rant about this one…
Translating strings from one language to another
ASP.NET: Translator opens the project in Visual Studio.NET (seriously!) so that he can use the .resx editor there to edit the cryptic XML files containing the text.
FairlyLocal & Everybody Else: Give your translator a .po file and have him edit it as text or with a 3rd party tool such as POedit
Identifying the language preference of the end user
Everybody: Automatically happens behind the scenes, but you can specify language preference too.
Referencing Translated Text (by using):
ASP.NET: Uniquely named Resource Keys
FairlyLocal: The text itself
Everybody Else: The text itself
When Visual Studio.NET does its magic, every runat="server" control will get a new attribute called meta:resourceKey containing a unique key with a helpful name such as "Literal26" or "HyperLink7" that is used to relate the text in the .resx file back to the control that uses it.
This is not actually as unhelpful as it seems, since translators will still see the Original Text in the .resx file alongside that meaningless key, so they will in fact know what text they're translating. Just not its context. Further, as ASP.NET developers we've learned to put up with a certain amount of VS.NET's autogenerated metagarbage, so we can generally gloss over these strange XML attributes that suddenly appear in our source.
Everybody else simply uses the text itself as the lookup key.
Displaying text to the end user in his preferred language
ASP.NET: Automagic. Can also ask for text directly from AppLocalResources
FairlyLocal: Automagic. Can also ask for translated text directly.
Everybody Else: Automagic. Can also ask for translated text directly.
In ASP.NET, you can add keys to your .resx file by hand if there are any messages you need that didn't get sniffed from the source. Other technologies don't need to bother with this step as often, since any text appearing in the source code will be marked for translation, whether it's associated with a control or not.
Wrapping Up
A short interlude...
I'm a believer in Sturgeon's Law, which states that "90% of everything is crap." Even ASP.NET, which I feel is still miles ahead of every other web development framework, is not immune.
We've learned to avoid using pretty much all of the "Rich" controls and Designer Mode garbage that shipped with 1.1 and has plagued .NET ever since, and every new release brings a few things with it (including, alas, System.Globalization) that are best avoided.
In my opinion, that's fine, since the rest of the framework is so ridiculously productive. Don't worry though, any honest Django or Rails veteran will tell you that their frameworks also have bits that are best left alone. And hey, the most popular platform in the world for building web apps is 100% crap, so we're still miles ahead of the game here in the land of MS.
Anybody still following along will notice that while ASP.NET offers workable solutions to every stage of the i18n process, it's generally not quite as straightforward or convenient as the alternative way of doing things. ASP.NET also tends to pollute your codebase with a lot of extraneous noise in the form of meta:resourceKey attributes (why couldn't they have at least shortened that to "key" and made it part of the Control class so you could easily add it to anything?) and .resx file collections for every single page in your site, and it leaves you a little short in the Tools department when it comes time to translate those files.
So while it's certainly possible to localize a website the way that ASP.NET recommends, it is definitely a lot of work, and it tends to be quite confusing. Doing it in another technology, say Django for instance, just doesn't seem like that big a deal. That's the sort of experience that I'm trying to bring to ASP.NET with the FairlyLocal library, and I hope it's at least a good first step.
If you have any suggestions (or better still, code contributions) to make it better, I look forward to hearing from you.
by Jason Kester
I've been building websites with ASP.NET for a little over 10 years now, and I have a dirty little secret to confess: I've never Internationalized a single one of them.
It's not from lack of trying, I can tell you. I've got a good dozen false starts under my belt, and plenty of hours spent studying the code from other people's sites that implement Internationalization (abbreviated as i18n for us lazy typists) the way that Microsoft wants you to do it. And my conclusion is that it's just plain not worth the effort.
I18n is hopelessly broken in ASP.NET. Let's look at this nice snippet of sample code to see why:
<!-- STEP ONE, in MyPage.aspx: Create Runat="Server" Literal Control: -->
<asp:Literal ID="lblPages"
runat="server"
meta:resourcekey="lblPagesResource1"
Text="Pages"/>
<!-- STEP TWO, in MyPage.es-ES.resx: Create Message Key/Value: -->
<data name="lblPagesResource1.Text" xml:space="preserve">
<value>Browse</value>
</data>
...and that's for EVERY piece of text in your whole site!
Notice that you need to make every single piece of localized text into a runat="server" control. And that you then need to add this crazy long attribute (that Intellisense doesn't know about, so you have to type out in full) to each one of those controls so that ASP.NET can find them in one of the Resource files that you need to generate by hand for every text fragment in your entire website.
If it sounds like a ridiculous amount of work for your developers, you're probably being charitable. In practice, it's so much extra work that nobody actually does it. That, my friends, is the reason you hardly ever see any multi-language websites written with ASP.NET.
Recently, however, my hand was truly forced. We're getting pretty close to launching FairTutor to the public, and since it has target audiences in both the United States and Latin America it pretty much needs to work in Spanish as well as English. This is the part where I start wistfully looking back to a couple Django projects we did not too long ago, and the absolute breeze it was localizing those sites. If only the rest of Django wasn't so crap, we could just port this project across and… Hang on a sec. Port. Yeah, how about we simply port that amazing Django i18n stuff over to ASP.NET instead.
That was a week ago.
Today, I'm releasing some code that I hope will single-handedly fix i18n in ASP.NET. It's based on the way that everybody else does it. Let's pause a minute to let that sink in, since many of my fellow .NET devs might not have been aware of this fact: There's another way of doing i18n, and it's so simple and straightforward that every other web framework uses it in some form or another to do multi-language websites.
In Django, PHP, Java, Rails, and pretty much everything else out there, you simply call a function called gettext() to localize text. Usually, you alias that function to _(), so you're looking at like 5 keystrokes (including quotes) to mark a piece of text for internationalization. That's simple enough that even lazy developers like me can be convinced to do it.
Better still, frameworks that use this gettext() library (it's actually a chunk of open source code from the GNU folks), also tend to come with a program that will sift through your source and automagically generate translation files for you (in .PO format, which is basic enough to be edited in notepad by non-tech-savvy translators, but is popular enough that there are several existing editors built just for it), containing every text fragment that was marked for i18n.
The whole process is so simple and straightforward that you're left to wonder why Microsoft felt compelled to spend so much time and effort reinventing it all to be worse.
Introducing FairlyLocal
I really want ASP.NET to stop forcing people to monkey with XML files and jump through hoops just to show web pages in Spanish, so I'm going to package up all this code and release it as Open Source:
FairlyLocal - Gettext Internationalization for ASP.NET
At the moment, there's not a whole lot to it. It'll find where you're using FairlyLocal.GetText() (or its _() alias) and generate .PO files for you. And it'll suck in various language versions of those files and translate text on your website. Not much there, eh? But then that's the whole point: i18n is supposed to be simple and straightforward. Hopefully, FairlyLocal will make that an actuality for the ASP.NET community.
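For a taste of what that looks like in a page, here's a sketch of the intended usage. The only calls named above are FairlyLocal.GetText() and its _() alias; the control name and wiring below are my own illustration:

using System;

public partial class MyPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // The English text itself is the lookup key, gettext-style.
        // No meta:resourceKey, no .resx files, no Design View.
        lblPages.Text = _("Pages");
    }

    // Alias GetText to _() so marking a string costs five keystrokes.
    static string _(string text)
    {
        return FairlyLocal.GetText(text);
    }
}

Or inline in the markup, where most of your static text lives: <%= _("Pages") %>.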
I look forward to hearing your feedback.
FairTutor is our latest project here at Expat. It's a website that connects Spanish teachers in South America with students in the US and lets them hold live online Spanish lessons. We'll be starting Beta classes soon, so if you want to score some free Spanish lessons, you might want to go sign up for the waiting list!
by Jason Kester