The Daily Parker

Politics, Weather, Photography, and the Dog

Sleep in on the weekends if you can

A Swedish psychologist has preliminary data that suggest sleeping in on the weekends can make up for some sleep loss during the week, maybe:

Sleeping in on a day off feels marvelous, especially for those of us who don't get nearly enough rest during the workweek. But are the extra weekend winks worth it? It's a question that psychologist Torbjorn Akerstedt, director of the Stress Research Institute at Stockholm University, and his colleagues tried to answer in a study published Wednesday in the Journal of Sleep Research.

Akerstedt and his colleagues grouped the 38,000 Swedes by self-reports of sleep duration. Short sleepers slept for less than five hours per night. Medium sleepers slept the typical seven hours. Long sleepers, per the new study, snoozed for nine or more hours.

The researchers further divided the groups by pairing their weekday and weekend habits. Short-short sleepers got less than five hours a night all week long. They had increased mortality rates. Long-long sleepers slept nine or more hours every night. They too had increased mortality rates.

The short-medium sleepers, on the other hand, slept less than five hours on weeknights but seven or eight hours on days off. Their mortality rates were not different from the average.

Personally, getting 9 hours seems like a luxury. But lately I haven't even been getting 7 often enough. I have a dream that someday I will have a full week of 7+ hour nights again. I last had that happen in January.

It was 20 years ago today

On 13 May 1998, just past midnight New York time, I posted my first joke on my brand-new website from my apartment in Brooklyn.

My first website of any kind was a page of links I maintained for myself starting in April 1997. Throughout 1997 and into 1998 I gradually experimented with Active Server Pages, the new hotness, and built out some rudimentary weather features. That site launched on 19 August 1997.

By early April 1998, I had a news feed, photos, and some other features. On April 2nd, I reserved the domain name. Then on May 6th, I launched a redesign that filled out our giant 1024 x 768 CRT displays. Here's what it looked like; please don't vomit:

On May 13th, 20 years ago today, I added a Jokes section. That's when I started posting things for the general public, not just for myself, which made the site a proto-blog. That's the milestone this post is commemorating.

Shortly after that, I changed the name to "The Write Site," which lasted until early 2000.

In 1999, Katie Toner redesigned the site. The earliest Wayback Machine image shows how it looked after that. Except for the screenshot above, I have no records of how the site looked prior to Katie's redesign, and no easy way of recreating it from old source code.

I didn't call it a "blog" until November 2005. But on the original site, I posted jokes, thoughts, news, my aviation log, and other bits of debris somewhat regularly. What else was it, really?

Today, The Daily Parker has 6,209 posts in dozens of categories. Will it go 20 more years? It might. Stick around.


This month will see two important Daily Parker milestones. This is the first one: the 6,000th post since the site launched as a pure blog in November 2005. The 5,000th post was back in March 2016, and the 4,000th in March 2014, so I'm trucking along at just about 500 a year, as this chart shows:

Almost exactly four years ago I predicted the 6,000th post would go up in September. I'm glad the rate has picked up a bit. (New predictions: 7,000 in May 2020 and 10,000 in April 2026.)

Once again, thanks for reading. And keep your eyes peeled for another significant Daily Parker milestone in a little less than two weeks.

List of 2018 A-to-Z topics

Here's the complete list of topics in the Daily Parker's 2018 Blogging A-to-Z challenge on the theme "Programming in C#":

Generally I posted all of them at noon UTC (7am Chicago time) on the proper day, except for the ones with stars. (April was a busy month.)

I hope you've enjoyed this series. I've already got topic ideas for next year. And next month the blog will hit two huge milestones, so stay tuned.

Z is for Zero

Today is the last day of the 2018 Blogging A-to-Z challenge. Today's topic: Nothing. Zero. Nada. Zilch. Null.

The concept of "zero" made it into Western mathematics only a few centuries ago, and it still has yet to make it into many developers' brains. The problem arises in particular when dealing with arrays and unexpected nulls.

In C#, arrays are zero-based. An array's first element appears at position 0:

var things = new[] { 1, 2, 3, 4, 5 };
Console.WriteLine(things[1]);
// -> 2

This causes no end of headaches for new developers who expect that, because the array above has a length of 5, its last element is #5. But doing this:

Console.WriteLine(things[5]);

...throws an IndexOutOfRangeException.

You get a similar problem when you try to read a string, because if you recall, strings are basically just arrays of characters:

var word = "12345";
Console.WriteLine(word.Length);
// 5

Console.WriteLine(word[5]);
// throws IndexOutOfRangeException

The funny thing is, both the array things and the string word have a length of 5.
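The rule that falls out of all this: the last valid index is always Length - 1. A minimal sketch (the second line uses the index-from-end operator, which requires C# 8 or later, so it postdates this post):

```csharp
using System;

class Program
{
    static void Main()
    {
        var things = new[] { 1, 2, 3, 4, 5 };

        // The last valid index is Length - 1, never Length.
        Console.WriteLine(things[things.Length - 1]); // -> 5

        // C# 8 added an index-from-end operator for the same thing.
        Console.WriteLine(things[^1]); // -> 5
    }
}
```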

The other bugaboo is null. Null means nothing. It is the absence of anything. It equals nothing, not even itself (though this, alas, is not always true).

Reference types can be null, and value types cannot. That's because value types always have to have a value, while reference types can simply be a reference to nothing. That said, the Nullable<T> structure gives value types a way into the nulliverse that even comes with its own cool syntax:

int? q = null;
int r = 0;
Console.WriteLine(q ?? 0 + r);
// 0

(What I love about this "struct?" syntax is you can almost hear it in a Scooby Doo voice, can't you?)

Line 1 defines a nullable System.Int32 as null. Line 2 defines a bog-standard Int32 equal to zero. You can't just add them and assign the result to an int, because the lifted + operator propagates the null (and reading q.Value while q is null throws an InvalidOperationException). So line 3 shows the null-coalescing operator, which contracts the null check into a succinct little fragment:

// Long form:
int result;
if (q.HasValue)
	result = q.Value + r;
else
	result = 0 + r;

// Shorter form:
int result = (q.HasValue ? q.Value : 0) + r;

// Shortest form (the parentheses matter):
int result = (q ?? 0) + r;
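One caveat worth calling out: ?? binds more loosely than +, so an unparenthesized q ?? 0 + r parses as q ?? (0 + r), which gives the wrong answer whenever q actually has a value. A quick sketch of the difference:

```csharp
using System;

class Program
{
    static void Main()
    {
        int? q = 5;
        int r = 2;

        // Without parentheses, the addition happens first,
        // inside the fallback branch that never runs here.
        int a = q ?? 0 + r;   // parses as q ?? (0 + r)
        int b = (q ?? 0) + r; // what we actually meant

        Console.WriteLine(a); // 5
        Console.WriteLine(b); // 7
    }
}
```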

And so the Daily Parker concludes the 2018 Blogging A-to-Z challenge with an entire post about nothing. I hope you've enjoyed the posts this month. Later this morning, I'll post the complete list of topics as a permanent page. Let me know what you think in the comments. It's been a fun challenge.

Y is for Y2K (and other date/time problems)

I should have posted day 25 of the Blogging A-to-Z challenge yesterday, but life happened, as it has a lot this month. I'm looking forward to June, when I might not have the over-scheduling I've experienced since mid-March. We'll see.

So it's appropriate that today's topic involves one of the things most programmers get wrong: dates and times. And we can start 20 years ago when the world was young...

A serious problem loomed in the software world in the late 1990s: programmers, starting as far back as the 1950s, had used 2-digit fields to represent the year portion of dates. As I mentioned Friday, it's important to remember that memory, communications, and storage cost a lot more than programmer time until the last 15 years or so. A 2-digit year field makes a lot of sense in 1960, or even 1980, because it saves lots of money, and why on earth would people still use this software 20 or 30 years from now?

You can see (or remember) what happened: the year 2000. If today is 991231 and tomorrow is 000101, what does that do to your date math?
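A hypothetical sketch of the kind of arithmetic that broke (the field names and loan scenario here are illustrative, not from any real system):

```csharp
using System;

class Program
{
    static void Main()
    {
        // Two-digit years, as many legacy systems stored them.
        int openedYy = 99; // 1999
        int dueYy = 0;     // meant to be 2000

        // Naive subtraction says the account came due 99 years ago.
        int yearsUntilDue = dueYy - openedYy;
        Console.WriteLine(yearsUntilDue); // -99
    }
}
```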

It turns out, not a lot, because programmers generally planned for it way more effectively than non-technical folks realized. On the night of 31 December 1999, I was in a data center at a brokerage in New York, not doing anything. Because we had fixed all the potential problems already.

But as I said, dates and times are hard. Start with times: 24 hours, 60 minutes, 60 seconds...that's not fun. And then there's the calendar: 12 months, 52 weeks, 365 (or 366) days...also not fun.

It becomes pretty obvious even to novice programmers who think about the problem that days are the best unit to represent time in most human-scale cases. (Scientists, however, prefer seconds.) I mentioned on day 8 that I used Julian day numbers very, very early in my programming life. Microsoft (and the .NET platform) also uses the day as the base unit for all of its date classes, and relegates the display of date information to a different set of classes.

I'm going to skip the DateTime structure because it's basically useless. It will give you no end of debugging problems with its asinine DateTime.Kind member. This past week I had to fix exactly this kind of thing at work.

Instead, use the DateTimeOffset structure. It represents an unambiguous point in time, with a DateTime value for the date and time and a TimeSpan value for the offset from UTC. As Microsoft explains:

The DateTimeOffset structure includes a DateTime value, together with an Offset property that defines the difference between the current DateTimeOffset instance's date and time and Coordinated Universal Time (UTC). Because it exactly defines a date and time relative to UTC, the DateTimeOffset structure does not include a Kind member, as the DateTime structure does. It represents dates and times with values whose UTC ranges from 12:00:00 midnight, January 1, 0001 Anno Domini (Common Era), to 11:59:59 P.M., December 31, 9999 A.D. (C.E.).

The time component of a DateTimeOffset value is measured in 100-nanosecond units called ticks, and a particular date is the number of ticks since 12:00 midnight, January 1, 0001 A.D. (C.E.) in the GregorianCalendar calendar. A DateTimeOffset value is always expressed in the context of an explicit or default calendar. Ticks that are attributable to leap seconds are not included in the total number of ticks.

Yes. This is the way to do it. Except...well, you know what? Let's skip how the calendar has changed over time. (Short answer: the year 1 was not the year 1.)

In any event, DateTimeOffset gives you methods to calculate times and dates accurately across a nearly 10,000-year range.
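A small sketch of what that unambiguity buys you: two wall-clock times with different offsets can denote the same instant, and DateTimeOffset knows it.

```csharp
using System;

class Program
{
    static void Main()
    {
        // The same instant expressed two ways:
        // Chicago daylight time (UTC-5) and UTC itself.
        var chicago = new DateTimeOffset(2018, 4, 26, 14, 51, 0, TimeSpan.FromHours(-5));
        var utc = new DateTimeOffset(2018, 4, 26, 19, 51, 0, TimeSpan.Zero);

        // Equality compares the point in time, not the local representation.
        Console.WriteLine(chicago == utc);             // True
        Console.WriteLine((utc - chicago).TotalHours); // 0
    }
}
```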

Which is to say nothing of time zones...

X is for XML vs. JSON

Welcome to the antepenultimate day (i.e., the 24th) of the Blogging A-to-Z challenge.

Today we'll look at how communicating between foreign systems has evolved over time, leaving us with two principal formats for information interchange: eXtensible Markup Language (XML) and JavaScript Object Notation (JSON).

Back in the day, even before I started writing software, computer systems talked to each other using specific protocols. Memory, tape (!) and other storage, and communications had significant costs per byte of data. Systems needed to squeeze out every bit in order to achieve acceptable performance and storage costs. (Just check out my computer history, and wrap your head around the 2400 bit-per-second modem that I used with my 4-megabyte 386 box, which I upgraded to 8 MB for $350 in 2018 dollars.)

So, if you wanted to talk to another system, you and the other programmers would work out a protocol that specified what each byte meant at each position. Then you'd send cryptic codes over the wire and hope the other machine understood you. Then you'd spend weeks debugging minor problems.

Fast forward to 1996, when storage and communications costs finally dropped below labor costs, and the W3C created XML. Now, instead of doing something like this:

METAR KORD 261951Z VRB06KT 10SM OVC250 18/M02 A2988

You could do something like this:

<?xml version="1.0" encoding="utf-8"?>
<weatherReport>
	<station name="Chicago O'Hare Field">KORD</station>
	<observationTime timeZone="America/Chicago" utc="2018-04-26T19:51+0000">2018-04-26 14:51</observationTime>
	<wind>
		<direction degrees="">Variable</direction>
		<speed units="Knots">6</speed>
	</wind>
	<visibility units="miles">10</visibility>
	<clouds>
		<layer units="feet" ceiling="true" condition="overcast">25000</layer>
	</clouds>
	<temperature units="Celsius">18</temperature>
	<dewpoint units="Celsius">-2</dewpoint>
	<altimeter units="inches Hg">29.88</altimeter>
</weatherReport>

The XML only takes up a few bytes (612 uncompressed, about 300 compressed), but humans can read it, and so can computers. You can even create and share an XML Schema Definition (XSD) describing what the XML document should contain. That way, both the sending and receiving systems can agree on the format, and change it as needed without a lot of reprogramming.
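And because it's well-formed, .NET can read it directly with LINQ to XML (System.Xml.Linq). A minimal sketch, trimmed to two elements and assuming a <weatherReport> root wraps them:

```csharp
using System;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        var xml = @"<weatherReport>
  <station name=""Chicago O'Hare Field"">KORD</station>
  <temperature units=""Celsius"">18</temperature>
</weatherReport>";

        var report = XElement.Parse(xml);

        // Elements and attributes come out as strings,
        // with explicit casts to other types available.
        Console.WriteLine((string)report.Element("station").Attribute("name")); // Chicago O'Hare Field
        Console.WriteLine((int)report.Element("temperature"));                  // 18
    }
}
```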

To display XML, you can use the eXtensible Stylesheet Language (XSL), which transforms and styles your XML for presentation. (My Weather Now project uses this approach.)

A few years later, Douglas Crockford defined an even simpler standard: JSON. It removes the heavy structure from XML and presents data as a set of key-value pairs. Now our weather report can look like this:

{
  "weatherReport": {
    "station": {
      "name": "Chicago O'Hare Field",
      "icao code": "KORD"
    },
    "observationTime": {
      "timeZone": "America/Chicago",
      "utc": "2018-04-26T19:51+0000",
      "local": "2018-04-26 14:51 -05:00"
    },
    "winds": {
      "direction": { "text": "Variable" },
      "speed": {
        "units": "Knots",
        "value": "6"
      }
    },
    "visibility": {
      "units": "miles",
      "value": "10"
    },
    "clouds": {
      "layer": {
        "units": "feet",
        "ceiling": "true",
        "condition": "overcast",
        "value": "25000"
      }
    },
    "temperature": {
      "units": "Celsius",
      "value": "18"
    },
    "dewpoint": {
      "units": "Celsius",
      "value": "-2"
    },
    "altimeter": {
      "units": "inches Hg",
      "value": "29.88"
    }
  }
}
JSON is easier to read, and JavaScript (and JavaScript libraries like jQuery) can parse it natively. You can add or remove key-value pairs as needed, often without the receiving system complaining. There's even a JSON Schema project that promises to give you the security of XSD.

Which format should you use? It depends on how structured you need the data to be, and how easily you need to read it as a human.


Three on climate change

Earlier this week, the Post reported on data suggesting that one of the scariest predictions of anthropogenic climate change theory is coming true:

The new research, based on ocean measurements off the coast of East Antarctica, shows that melting Antarctic glaciers are indeed freshening the ocean around them. And this, in turn, is blocking a process in which cold and salty ocean water sinks below the sea surface in winter, forming “the densest water on the Earth,” in the words of study lead author Alessandro Silvano, a researcher with the University of Tasmania in Hobart.

In other words, the melting of Antarctica’s glaciers appears to be triggering a “feedback” loop in which that melting, through its effect on the oceans, triggers still more melting. The melting water stratifies the ocean column, with cold fresh water trapped at the surface and warmer water sitting below. Then, the lower layer melts glaciers and creates still more melt water — not to mention rising seas as glaciers lose mass.

"The idea is that this mechanism of rapid melting and warming of the ocean triggered sea level rise at other times, like the last glacial maximum, when we know rapid sea level rise was five meters per century,” Silvano said. “And we think this mechanism was the cause of rapid sea-level rise.”

Meanwhile, Chicago magazine speculates about what these changes will mean to our city in the next half-century:

Can Chicago really become a better, maybe even a far better, place while much of the world suffers the intensifying storms and droughts resulting from climate change? A growing consensus suggests the answer may be a cautious yes. For one, there’s Amir Jina, an economist at the University of Chicago who studies how global warming affects regional economies. In the simulations he ran, as temperatures rise, rainfall intensifies, and seas surge, Chicago fares better than many big U.S. cities because of its relative insulation from the worst ravages of heat, hurricanes, and loss of agriculture.

Indeed, the Great Lakes could be considered our greatest insurance against climate change. They contain 95 percent of North America’s supply of freshwater—and are protected by the Great Lakes Water Compact, which prohibits cities and towns outside the Great Lakes basin from tapping them. While aquifers elsewhere run dry, Chicago should stay flush for hundreds of years to come.

“We’re going to be like the Saudi Arabia of freshwater,” says David Archer, a professor of geophysical science at the University of Chicago. “This is one of the best places in the world to live out global warming.”

There’s just one problem: Water, which should be our salvation, could also do us in.

The first drops of the impending deluge have already fallen. Every one-degree rise in temperature increases the atmosphere’s capacity to hold water vapor by almost 4 percent. As a result, rain and snow come down with more force. Historically, there’s been a 4 percent chance of a storm occurring in any given year in Chicago that drops 5.88 inches of rain in 48 hours—a so-called 25-year storm. In the last decade alone, we have had one 25-year storm, plus a 50-year storm and, in 2011, a 100-year storm. In the best-case scenario, where carbon emissions stay relatively under control, we’re looking at a 25 percent increase in the number of days with extreme rainfall by the end of the century. The worst-case scenario sees a surge of 60 percent. Precipitation overall may increase by as much as 30 percent.

And in today's Times, Justin Gillis and Hal Harvey argue that cars are ruining our cities as well as our climate:

[T]he truth is that people who drive into a crowded city are imposing costs on others. They include not just reduced mobility for everyone and degraded public space, but serious health costs. Asthma attacks are set off by the tiny, invisible soot particles that cars emit. Recent research shows that a congestion charge in Stockholm reduced pollution and sharply cut asthma attacks in children.

The bottom line is that the decision to turn our public streets so completely over to the automobile, as sensible as it might have seemed decades ago, nearly wrecked the quality of life in our cities.

We are revealing no big secrets here. Urban planners have known all these things for decades. They have known that removing lanes to add bike paths and widen sidewalks can calm traffic, make a neighborhood more congenial — and, by the way, increase sales at businesses along that more pleasant street. They have known that imposing tolls with variable pricing can result in highway lanes that are rarely jammed.

We're adapting, slowly, to climate change. Over my lifetime I've seen the air in Chicago and L.A. get so much cleaner I can scarcely remember how bad it was growing up. (Old photos help.) But we're in for some pretty big changes in the next few years. I think Chicago will ultimately do just fine, except for being part of the world that has to adapt more dramatically than any time in the last few thousand years.

W is for while (and other iterators)

We're in the home stretch. It's day 23 of the Blogging A-to-Z challenge, and it's time to loop the loop.

C# has a number of ways to iterate over a collection of things, and a base interface that lets you know you can use an iterator.

The simplest way to iterate is with while, which keeps looping as long as a condition holds:

var n = 1;
while (n < 6)
{
	Console.WriteLine($"n = {n}");
	n++;
}

while is similar to do:

var n = 1;
do
{
	Console.WriteLine($"n = {n}");
	n++;
} while (n < 6);

The main difference is that the do loop will always execute once, but the while loop may not.
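You can see the difference with a condition that's false from the start — a quick sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        var n = 10; // (n < 6) is false immediately

        // The while loop tests first, so the body never runs.
        while (n < 6)
            Console.WriteLine("while body ran");

        // The do loop tests after, so the body runs exactly once.
        do
        {
            Console.WriteLine("do body ran");
        } while (n < 6);
    }
}
```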

The next level up is the for loop:

for (var n = 1; n < 6; n++)
	Console.WriteLine($"n = {n}");

Similar, no?

Then there is foreach, which iterates over a set of things. This requires a bit more explanation.

The base interface IEnumerable and its generic equivalent IEnumerable<T> expose a single method, GetEnumerator, that foreach uses to go through all of the items in the collection. Generally, anything in the BCL that holds a set of objects implements IEnumerable: System.Array, System.Collections.ICollection, System.Collections.Generic.List<T>...and many, many others. Each of these classes lets you manipulate the set of objects the thing contains:

var things = new[] { 1, 2, 3, 4, 5 }; // array of int, or int[]
foreach (var it in things)
	Console.WriteLine(it);

foreach will iterate over all the things in the order they were added to the array. But it also works with LINQ to give you even more power:

var things = new List<int> { 1, 2, 3, 4, 5 };
foreach (var it in things.Where(p => p % 2 == 0))
	Console.WriteLine(it);

Three guesses what that snippet does.
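And foreach isn't limited to BCL collections. Any type that exposes a suitable GetEnumerator works; the Countdown class below is a hypothetical sketch using yield return, which makes the compiler generate the enumerator state machine for you:

```csharp
using System;
using System.Collections.Generic;

class Countdown
{
    private readonly int _from;

    public Countdown(int from) => _from = from;

    // yield return tells the compiler to build the IEnumerator
    // state machine, so foreach works on Countdown instances.
    public IEnumerator<int> GetEnumerator()
    {
        for (var n = _from; n >= 1; n--)
            yield return n;
    }
}

class Program
{
    static void Main()
    {
        foreach (var n in new Countdown(3))
            Console.Write($"{n} "); // 3 2 1
    }
}
```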

These keywords and structures are so fundamental to C# that I recommend reading up on them.

V is for var

For my second attempt at this post (after a BSOD), here (on time yet!) is day 22 of the Blogging A-to-Z challenge.

Today's topic: the var keyword, which has sparked more religious wars since it emerged in 2007 than almost any other language improvement in the C# universe.

Before C# 3.0, the language required you to declare every variable explicitly, like so:

using System;
using InnerDrive.Framework.Financial;

Int32 x = 123; // same as int x = 123;
Money m = 123;

Starting with C# 3.0, you could do this instead:

var i = 123;
var m = new Money(123);

As long as you give the compiler enough information to infer the variable type, it will let you stop caring about the type. (The reason line 2 works in the first example is that the Money struct can convert from other numeric types, so it infers what you want from the assignment. In the second example, you still have to declare a new Money, but the compiler can take it from there.)

Some people really can't stand not knowing what types their variables are. Others can't figure it out and make basic errors. Both groups of people need to relax and think it through.

Variables should convey meaning, not technology. I really don't care whether m is an integer, a decimal, or a Money, as long as I can use it to make the calculations I need. Where var gets people into trouble is when they forget that the compiler can't infer type from the contents of your skull, only the code you write. Which is why this is one of my favorite interview problems:

var x = 1;
var y = 3;
var z = x / y;

// What is the value of z?

The compiler infers that x and y are integers, so when it divides them it comes up with 0: 1/3 is less than 1, and .NET truncates fractions when doing integer math.

In this case you need to do one of four things:

  • Explicitly declare x to be a floating-point type
  • Explicitly declare y to be a floating-point type
  • Explicitly declare the value on line 1 to be a floating-point value
  • Explicitly declare the value on line 2 to be a floating-point value
// Solution 1:

double x = 1;
int y = 3;
var z = x / y;

// z = 0.333...

// Solution 3:

var x = 1f;
var y = 3;
var z = x / y;

// z == 0.333333343

(I'll leave it as an exercise for the reader why the last line is wrong. Hint: .NET has three floating-point types, and they all do math differently.)

Declaring z to be a floating-point type won't help. Trust me on this.

The other common reason for using an explicit declaration is when you want to specify which interface to use on a class. This is less common, but still useful. For example, System.String implements both IEnumerable and IEnumerable<char>, which behave differently. Imagine an API that accepts both versions and you want to specify the older, non-generic version:

var s = "The lazy fox jumped over the quick dog.";
System.Collections.IEnumerable e = s;


Again, that's an unusual situation and not the best code snippet, but you can see why this might be a thing. The compiler won't infer that you want to use the obsolete String.IEnumerable implementation under most circumstances. This forces the issue. (So does using the as keyword.)

In future posts I may come back to this, especially if I find a good example of when to use an explicit declaration in C# 7.