The Daily Parker

Politics, Weather, Photography, and the Dog

Second, third, and fourth looks

Every so often I like to revisit old photos to see if I can improve them. Here's one of my favorites, which I took by the River Arun in Amberley, West Sussex, on 11 June 1992:

The photo above is one of the first direct-slide scans I have, which I originally published here in 2009, right after I took this photo at nearly the same location:

(I'm still kicking myself for not getting the angle right. I'll have to try again next time I'm in the UK.)

Those are the photos as they looked in 2009. Yesterday, during an extended internet outage at my house, I revisited them in Lightroom. Here's the 1992 shot, slightly edited:

And the 2009 shot, with slightly different treatment:

A side note: I did revisit Amberley in 2015, but I took the path up from Arundel instead of going around the northern path back into Amberley as in 2009, so I didn't re-shoot the bridge. Next time.

Chicago coyotes: how are they thriving?

Darryl Fears, writing for the Washington Post today, highlights a new study that explains why coyotes have adapted so well to human environments:

As mountain lions and wolf packs disappeared from the landscape, coyotes took advantage, starting a wide expansion eastward at the turn of the last century into deforested land that continues today.

For reasons biologists do not quite understand, coyotes prefer open land over forest. It could be that bigger predators that kill them over territory and competition for food could better sneak up on them in forests, [Roland Kays, a research associate professor at North Carolina State University and the North Carolina Museum of Natural Sciences] theorized. But now, cameras have caught coyotes in forests where the apex predators have largely been removed, opening the prospect that coyotes could continue to move into territories where they have never been, such as into South America.

Unlike mountain lions, wolves and bears that were hunted to near-extinction in state-sponsored predator-control programs, coyotes do not give in easily, Kays said. “Coyotes are the ultimate American survivor. They have endured persecution all over the place. They are sneaky enough. They eat whatever they can find — insects, smaller mammals, garbage,” he said.

I've reported on coyotes before, in part because I'm happy they've found a home in Chicago. I've even seen them on my street, no more than 50 meters away from me.

The Cook County Forest Preserve District has some FAQs on coyotes, including what to do if one takes an interest in you.

Sleep in on the weekends if you can

A Swedish psychologist has preliminary data that suggest sleeping in on the weekends can make up for some sleep loss during the week, maybe:

Sleeping in on a day off feels marvelous, especially for those of us who don't get nearly enough rest during the workweek. But are the extra weekend winks worth it? It's a question that psychologist Torbjorn Akerstedt, director of the Stress Research Institute at Stockholm University, and his colleagues tried to answer in a study published Wednesday in the Journal of Sleep Research.

Akerstedt and his colleagues grouped the 38,000 Swedes by self-reports of sleep duration. Short sleepers slept for less than five hours per night. Medium sleepers slept the typical seven hours. Long sleepers, per the new study, snoozed for nine or more hours.

The researchers further divided the groups by pairing their weekday and weekend habits. Short-short sleepers got less than five hours a night all week long. They had increased mortality rates. Long-long sleepers slept nine or more hours every night. They too had increased mortality rates.

The short-medium sleepers, on the other hand, slept less than five hours on weeknights but seven or eight hours on days off. Their mortality rates were not different from the average.

Personally, getting 9 hours seems like a luxury; lately I haven't even been getting 7 often enough. I have a dream that someday I will have a full week of 7+ hour nights again. I last managed that in January.

It was 20 years ago today

On 13 May 1998, just past midnight New York time, I posted my first joke on my brand-new braverman.org website from my apartment in Brooklyn.

My first website of any kind was a page of links I maintained for myself starting in April 1997. Throughout 1997 and into 1998 I gradually experimented with Active Server Pages, the new hotness, and built out some rudimentary weather features. That site launched on 19 August 1997.

By early April 1998, I had a news feed, photos, and some other features. On April 2nd, I reserved the domain name braverman.org. Then on May 6th, I launched a redesign that filled out our giant 1024 x 768 CRT displays. Here's what it looked like; please don't vomit:

On May 13th, 20 years ago today, I added a Jokes section. That's when I started posting things for the general public, not just for myself, which made the site a proto-blog. That's the milestone this post is commemorating.

Shortly after that, I changed the name to "The Write Site," which lasted until early 2000.

In 1999, Katie Toner redesigned the site. The earliest Wayback Machine image shows how it looked after that. Except for the screenshot above, I have no records of how the site looked prior to Katie's redesign, and no easy way of recreating it from old source code.

I didn't call it a "blog" until November 2005. But on the original braverman.org site, I posted jokes, thoughts, news, my aviation log, and other bits of debris somewhat regularly. What else was it, really?

Today, The Daily Parker has 6,209 posts in dozens of categories. Will it go 20 more years? It might. Stick around.

6,000

This month will see two important Daily Parker milestones. This is the first one: the 6,000th post since braverman.org launched as a pure blog in November 2005. The 5,000th post was back in March 2016, and the 4,000th in March 2014, so I'm trucking along at just about 500 a year, as this chart shows:

Almost exactly four years ago I predicted the 6,000th post would go up in September. I'm glad the rate has picked up a bit. (New predictions: 7,000 in May 2020 and 10,000 in April 2026.)

Once again, thanks for reading. And keep your eyes peeled for another significant Daily Parker milestone in a little less than two weeks.

List of 2018 A-to-Z topics

Here's the complete list of topics in the Daily Parker's 2018 Blogging A-to-Z challenge on the theme "Programming in C#":

Generally I posted all of them at noon UTC (7am Chicago time) on the proper day, except for the ones with stars. (April was a busy month.)

I hope you've enjoyed this series. I've already got topic ideas for next year. And next month the blog will hit two huge milestones, so stay tuned.

Z is for Zero

Today is the last day of the 2018 Blogging A-to-Z challenge. Today's topic: Nothing. Zero. Nada. Zilch. Null.

The concept of "zero" only made it into Western mathematics a few centuries ago, and it still has yet to make it into many developers' brains. The problems arise in particular when dealing with arrays and with unexpected nulls.

In C#, arrays are zero-based. An array's first element appears at position 0:

var things = new[] { 1, 2, 3, 4, 5 };
Console.WriteLine(things[1]);

// -> 2

This causes no end of headaches for new developers who expect that, because the array above has a length of 5, its last element is #5. But doing this:

Console.WriteLine(things[5]);

...throws an IndexOutOfRangeException.
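The fix is to remember that the last valid index is always Length - 1; a minimal sketch:

```csharp
var things = new[] { 1, 2, 3, 4, 5 };

// The last valid index is Length - 1, not Length.
Console.WriteLine(things[things.Length - 1]);

// -> 5

// Looping with < Length (not <=) keeps every index in range.
for (var i = 0; i < things.Length; i++)
{
	Console.Write(things[i]);
}

// -> 12345
```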

You get a similar problem when you try to index into a string, because, if you recall, strings are basically just arrays of characters:

var word = "12345";
Console.WriteLine(word[4]);

// 5

Console.WriteLine(word[5]);

// IndexOutOfRangeException

The funny thing is, both the array things and the string word have a length of 5.

The other bugaboo is null. Null means nothing. It is the absence of anything. It equals nothing, not even itself (though this, alas, is not always true).

Reference types can be null, and value types cannot. That's because value types always have to have a value, while reference types can simply be a reference to nothing. That said, the Nullable<T> structure gives value types a way into the nulliverse that even comes with its own cool syntax:

int? q = null;
int r = 0;
Console.WriteLine((q ?? 0) + r);
// 0

(What I love about this "struct?" syntax is you can almost hear it in a Scooby Doo voice, can't you?)

Line 1 defines a nullable System.Int32 as null. Line 2 defines a bog-standard Int32 equal to zero. If you try to add them with q.Value + r, you get an InvalidOperationException, because a null Nullable&lt;T&gt; has no Value to read. So line 3 uses the null-coalescing operator, which basically contracts both of these statements into a succinct little fragment:

// Long form:
int result;
if (q.HasValue)
{
	result = q.Value + r;
}
else
{
	result = 0 + r;
}

// Shorter form:
int result = (q.HasValue ? q.Value : 0) + r;

// Shortest form (the parentheses matter: ?? binds more loosely than +):
int result = (q ?? 0) + r;

And so the Daily Parker concludes the 2018 Blogging A-to-Z challenge with an entire post about nothing. I hope you've enjoyed the posts this month. Later this morning, I'll post the complete list of topics as a permanent page. Let me know what you think in the comments. It's been a fun challenge.

Y is for Y2K (and other date/time problems)

I should have posted day 25 of the Blogging A-to-Z challenge yesterday, but life happened, as it has a lot this month. I'm looking forward to June, when I might not have the over-scheduling I've experienced since mid-March. We'll see.

So it's appropriate that today's topic involves one of the things most programmers get wrong: dates and times. And we can start 20 years ago when the world was young...

A serious problem loomed in the software world in the late 1990s: programmers, starting as far back as the 1950s, had used 2-digit fields to represent the year portion of dates. As I mentioned Friday, it's important to remember that memory, communications, and storage cost a lot more than programmer time until the last 15 years or so. A 2-digit year field makes a lot of sense in 1960, or even 1980, because it saves lots of money, and why on earth would people still use this software 20 or 30 years from now?

You can see (or remember) what happened: the year 2000. If today is 991231 and tomorrow is 000101, what does that do to your date math?
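A toy sketch of the breakage, assuming dates stored as YYMMDD strings and compared lexically, as much legacy code did:

```csharp
var today = "991231";     // 31 December 1999, 2-digit year
var tomorrow = "000101";  // 1 January 2000

// Lexical comparison says "tomorrow" comes before "today".
Console.WriteLine(string.CompareOrdinal(tomorrow, today) < 0);

// -> True

// Four-digit years restore the correct ordering.
Console.WriteLine(string.CompareOrdinal("20000101", "19991231") > 0);

// -> True
```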

It turns out, not a lot, because programmers generally planned for it way more effectively than non-technical folks realized. On the night of 31 December 1999, I was in a data center at a brokerage in New York, not doing anything. Because we had fixed all the potential problems already.

But as I said, dates and times are hard. Start with times: 24 hours, 60 minutes, 60 seconds...that's not fun. And then there's the calendar: 12 months, 52 weeks, 365 (or 366) days...also not fun.

It becomes pretty obvious even to novice programmers who think about the problem that days are the best unit to represent time in most human-scale cases. (Scientists, however, prefer seconds.) I mentioned on day 8 that I used Julian day numbers very, very early in my programming life. Microsoft's .NET platform similarly stores a point in time as a single number (a count of ticks) and relegates the display of date information to a different set of classes.

I'm going to skip the DateTime structure because it's basically useless. It will give you no end of debugging problems with its asinine DateTime.Kind member. This past week I had to fix exactly this kind of thing at work.

Instead, use the DateTimeOffset structure. It represents an unambiguous point in time, with a DateTime value for the date and time and a TimeSpan value for the offset from UTC. As Microsoft explains:

The DateTimeOffset structure includes a DateTime value, together with an Offset property that defines the difference between the current DateTimeOffset instance's date and time and Coordinated Universal Time (UTC). Because it exactly defines a date and time relative to UTC, the DateTimeOffset structure does not include a Kind member, as the DateTime structure does. It represents dates and times with values whose UTC ranges from 12:00:00 midnight, January 1, 0001 Anno Domini (Common Era), to 11:59:59 P.M., December 31, 9999 A.D. (C.E.).

The time component of a DateTimeOffset value is measured in 100-nanosecond units called ticks, and a particular date is the number of ticks since 12:00 midnight, January 1, 0001 A.D. (C.E.) in the GregorianCalendar calendar. A DateTimeOffset value is always expressed in the context of an explicit or default calendar. Ticks that are attributable to leap seconds are not included in the total number of ticks.

Yes. This is the way to do it. Except...well, you know what? Let's skip how the calendar has changed over time. (Short answer: the year 1 was not the year 1.)

In any event, DateTimeOffset gives you methods to calculate dates and times accurately across a nearly 10,000-year range.
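A quick sketch of what that unambiguity buys you: two DateTimeOffset values with different offsets still compare as the same instant, and arithmetic operates on the instant rather than the local clock face:

```csharp
// The same instant, expressed in two different UTC offsets.
var chicago = new DateTimeOffset(2018, 4, 28, 14, 51, 0, TimeSpan.FromHours(-5));
var utc = new DateTimeOffset(2018, 4, 28, 19, 51, 0, TimeSpan.Zero);

// Equality compares the point in time, not the local representation.
Console.WriteLine(chicago == utc);

// -> True

// Arithmetic is unambiguous, too.
var later = chicago.AddHours(3);
Console.WriteLine((later - utc).TotalHours);

// -> 3
```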

Which is to say nothing of time zones...

X is for XML vs. JSON

Welcome to the antepenultimate day (i.e., the 24th) of the Blogging A-to-Z challenge.

Today we'll look at how communicating between foreign systems has evolved over time, leaving us with two principal formats for information interchange: eXtensible Markup Language (XML) and JavaScript Object Notation (JSON).

Back in the day, even before I started writing software, computer systems talked to each other using specific protocols. Memory, tape (!) and other storage, and communications had significant costs per byte of data. Systems needed to squeeze out every bit in order to achieve acceptable performance and storage costs. (Just check out my computer history, and wrap your head around the 2400 bit-per-second modem that I used with my 4-megabyte 386 box, which I upgraded to 8 MB for $350 in 2018 dollars.)

So, if you wanted to talk to another system, you and the other programmers would work out a protocol that specified what each byte meant at each position. Then you'd send cryptic codes over the wire and hope the other machine understood you. Then you'd spend weeks debugging minor problems.

Fast forward to the late 1990s, when storage and communications costs finally dropped below labor costs, and the W3C created XML (work began in 1996; the 1.0 recommendation arrived in February 1998). Now, instead of doing something like this:

METAR KORD 261951Z VRB06KT 10SM OVC250 18/M02 A2988

You could do something like this:

<?xml version="1.0" encoding="utf-8"?>
<weatherReport>
	<station name="Chicago O'Hare Field">KORD</station>
	<observationTime timeZone="America/Chicago" utc="2018-04-26T19:51+0000">2018-04-26 14:51</observationTime>
	<winds>
		<direction degrees="">Variable</direction>
		<speed units="Knots">6</speed>
	</winds>
	<visibility units="miles">10</visibility>
	<clouds>
		<layer units="feet" ceiling="true" condition="overcast">25000</layer>
	</clouds>
	<temperature units="Celsius">18</temperature>
	<dewpoint units="Celsius">-2</dewpoint>
	<altimeter units="inches Hg">29.88</altimeter>
</weatherReport>

The XML takes up a few hundred bytes (612 uncompressed, about 300 compressed), but humans can read it, and so can computers. You can even create and share an XML Schema Definition (XSD) describing what the XML document should contain. That way, both the sending and receiving systems can agree on the format, and change it as needed without a lot of reprogramming.
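On the consuming side, .NET's LINQ to XML (System.Xml.Linq) parses a document like this into a queryable tree; a minimal sketch against a trimmed-down weather report:

```csharp
using System;
using System.Xml.Linq;

var xml = @"<weatherReport>
	<station name=""Chicago O'Hare Field"">KORD</station>
	<temperature units=""Celsius"">18</temperature>
</weatherReport>";

var doc = XDocument.Parse(xml);

// Elements and attributes come back as navigable objects.
Console.WriteLine(doc.Root.Element("station").Attribute("name").Value);

// -> Chicago O'Hare Field

// XElement converts directly to common value types.
Console.WriteLine((int)doc.Root.Element("temperature"));

// -> 18
```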

To display XML, you can use the eXtensible Stylesheet Language (XSL), which transforms your XML into a presentable form, such as styled HTML. (My Weather Now project uses this approach.)

A few years later, in the early 2000s, Douglas Crockford specified an even simpler format: JSON. It removes the heavy structure of XML and presents data as nested sets of key-value pairs. Now our weather report can look like this:

{
  "weatherReport": {
    "station": {
      "name": "Chicago O'Hare Field",
      "icao code": "KORD"
    },
    "observationTime": {
      "timeZone": "America/Chicago",
      "utc": "2018-04-26T19:51+0000",
      "local": "2018-04-26 14:51 -05:00"
    },
    "winds": {
      "direction": { "text": "Variable" },
      "speed": {
        "units": "Knots",
        "value": "6"
      }
    },
    "visibility": {
      "units": "miles",
      "value": "10"
    },
    "clouds": {
      "layer": {
        "units": "feet",
        "ceiling": "true",
        "condition": "overcast",
        "value": "25000"
      }
    },
    "temperature": {
      "units": "Celsius",
      "value": "18"
    },
    "dewpoint": {
      "units": "Celsius",
      "value": "-2"
    },
    "altimeter": {
      "units": "inches Hg",
      "value": "29.88"
    }
  }
}

JSON is easier to read, and JavaScript (and JavaScript libraries like jQuery) can parse it natively. You can add or remove key-value pairs as needed, often without the receiving system complaining. There's even a JSON Schema project that promises to give you the same rigor as XSD.
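On the .NET side, the de facto JSON parser (then as now) is Newtonsoft's Json.NET, assumed here as a NuGet dependency; a minimal sketch against a trimmed-down report:

```csharp
using System;
using Newtonsoft.Json.Linq;

var json = @"{
  ""weatherReport"": {
    ""temperature"": { ""units"": ""Celsius"", ""value"": ""18"" }
  }
}";

// JToken indexers walk the key-value tree.
var report = JObject.Parse(json)["weatherReport"];

Console.WriteLine((string)report["temperature"]["units"]);

// -> Celsius

// Json.NET converts string leaves to value types on demand.
Console.WriteLine((int)report["temperature"]["value"]);

// -> 18
```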

Which format should you use? It depends on how structured you need the data to be, and how easily you need to read it as a human.

More reading: