Okay, to start out with, here's the rule that most people like to follow when they round numbers:
If the number after the least significant digit is 0-4, round down. If the number after the least significant digit is 5-9, round up.
For example, if you're rounding to the nearest 1/10, you would round 3.20, 3.21, 3.22, 3.23, 3.24 down to 3.2, and you would round 3.25, 3.26, 3.27, 3.28, 3.29 up to 3.3.
Unfortunately, that's wrong. Close but no cigar.
The real rounding rule is this:
If the number after the least significant digit is 0-4, round down. If the number after the least significant digit is 6-9, round up.
If the number after the least significant digit is 5, round to the nearest even number.
So in the example above, 3.25 should actually round down to 3.2, and all the other roundings would be the same. Also, 3.15 would round to 3.2 (because .2 is the closest even number to .15), and 3.35 would round to 3.4 (because .4 is the closest even number to .35).
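If you want to try this rule outside of Notes, Python's decimal module (just a convenient stand-in here, not anything the Notes functions use) implements "round half to even" directly:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Round each value to the nearest tenth using "round half to even"
# (a.k.a. banker's rounding). Decimal avoids binary floating-point
# surprises, so the trailing "5" really is a 5 here.
for s in ["3.15", "3.25", "3.35"]:
    rounded = Decimal(s).quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN)
    print(s, "->", rounded)
# 3.15 -> 3.2   (.2 is the nearest even tenth)
# 3.25 -> 3.2
# 3.35 -> 3.4
```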
The reason that the "5" rule exists is to eliminate rounding bias. There are normally two explanations given for this.
1. If a zero after the least significant digit doesn't count (because if there's a zero then you're not really rounding, are you?), that leaves four numbers that always round down (1-4), four numbers that always round up (6-9), and one number (5) that alternates fairly evenly between rounding up and rounding down.
2. Back when English accountants used to have to deal with a lot of halfpennies, it didn't make sense to always round a ha'penny (which would be 0.5 pence) up, because then it would always be treated as a full penny. So the "5" rule kept all of the rounding of monetary accounts honest.
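You can see the bias for yourself with a quick sketch (Python again, purely for illustration): round every tenth from 0.0 to 9.9 to a whole number both ways and total up the rounding error.

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

def total_error(rounding):
    # Sum of (rounded - original) over 0.0, 0.1, ..., 9.9
    total = Decimal(0)
    for k in range(100):
        x = Decimal(k) / 10
        total += x.quantize(Decimal(1), rounding=rounding) - x
    return total

# Always rounding 5 up accumulates +0.5 of error per decade;
# rounding 5 to the nearest even number cancels out exactly.
print(total_error(ROUND_HALF_UP))    # 5.0
print(total_error(ROUND_HALF_EVEN))  # 0.0
```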
Okay, with all that behind us (keep your eyes open for another minute or two), a lot of people think that rounding is "broken" when a number like 3.25 rounds to 3.2 instead of 3.3. That's not broken, that's just following the normal rules of statistics. So when I heard someone saying at Lotusphere that the @Round function was "broken" in Notes 6.5, I just wrote it off to that. It's a common misconception, and it can be very frustrating to people.
But then the other day I wrote the following formula to show the results of rounding numbers:
@Prompt([Ok]; "@Round Test";
	"3.05 = " + @Text(@Round(3.05; 0.1)) + @Char(10) +
	"3.15 = " + @Text(@Round(3.15; 0.1)) + @Char(10) +
	"3.25 = " + @Text(@Round(3.25; 0.1)) + @Char(10) +
	"3.35 = " + @Text(@Round(3.35; 0.1)) + @Char(10) +
	"3.45 = " + @Text(@Round(3.45; 0.1)) + @Char(10) +
	"3.55 = " + @Text(@Round(3.55; 0.1)) + @Char(10) +
	"3.65 = " + @Text(@Round(3.65; 0.1)) + @Char(10) +
	"3.75 = " + @Text(@Round(3.75; 0.1)) + @Char(10) +
	"3.85 = " + @Text(@Round(3.85; 0.1)) + @Char(10) +
	"3.95 = " + @Text(@Round(3.95; 0.1))
);
I ran the formula in my Notes 6.51 client, and was surprised at these results:
3.05 = 3
3.15 = 3.1
3.25 = 3.3
3.35 = 3.4
3.45 = 3.5
3.55 = 3.5
3.65 = 3.7
3.75 = 3.8
3.85 = 3.9
3.95 = 4
Well I'll be darned. That doesn't look right at all. Am I doing/expecting something wrong there, or is the @Round function truly broken? Does anyone get the same results in 6.0x or 6.53? I just ran this on my 5.0.10 client, and got the same thing.
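One likely suspect (an assumption on my part, since I can't see the @Round source) is binary floating point: most of these test values can't be stored exactly as doubles, so the "5" the rounding code actually sees may really be a hair above or below 5. Python's Decimal can display the exact value a double holds:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value the double actually stores.
# Of these three, only 3.25 (which is 13/4) is exactly representable.
print(Decimal(3.05))  # a hair below 3.05
print(Decimal(3.25))  # exactly 3.25
print(Decimal(3.35))  # a hair above 3.35
```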
I'd like to think that I must just be making a mistake (a rather common occurrence), but in LotusScript I get 3.0, 3.2, 3.2, 3.4, 3.4, 3.6, 3.6, 3.8, 3.8, 4 (using Print "3.05 = " & Round(3.05, 1), etc.), which is what I would expect.
If you didn't know, all of the projects on the OpenNTF site have little PayPal donation links at the bottom of the "Snapshot" tab for the project (that's the default page/tab when you select a project to view). By clicking the PayPal link, all of the money that you donate goes directly to the person or team of people who develop and support the project.
Even if it's a small amount of money, it is truly appreciated. And I'm not saying that from a "give me more money" point of view, I just know that many of us put a lot of our personal time into these things -- time that could otherwise be spent with our families, for example. Getting just a little something back feels really nice.
One of the things I was trying to do was use it for logging (yeah, I've always got to have an OpenLog tie-in), and I asked Damien about the possibility of capturing both the in-memory and on-disk version of the document when it's about to be modified, so I could log changes. He's thinking about it, and also posed the question on his blog. If you have any good thoughts on this, please leave him a comment. I think that would be interesting functionality.
It seems that Thomas Gumz did a lot of the UI work on the AgentBoost database. Looks nice. Maybe someday Thomas will have a website, too.
;-)
(UPDATE: just so I won't leave you hanging with talk of "this mysterious agent I wrote that won't work", here's the code I was dorking around with: AgentBoostLog.lss)
(UPDATE #2: wow, that was fast! Thanks to Andrew Tetlaw and "anonymous" on Damien's blog for pointing out that I could delete the DocumentContext reference before getting the doc by UNID, to get both the on-disk and the in-memory version of the doc. I guess that was the "Ask The Audience" lifeline we just used. An updated and seemingly fixed version of the agent is at: AgentBoostLog2.lss)
On Ed's site, I saw that "Activity Explorer" translates into "Activity Pimp". "Hmm," thought Julian, "maybe that's the answer to the IBM product naming dilemma." We all complain about the long and impossible to remember product renaming that the Lotus software packages have been suffering through (Sametime became Lotus Instant Messaging and Web Conferencing, etc.), and we've been clamoring to get the names shortened back to what they were before.
But maybe they don't necessarily need to be shorter. Maybe they just need to be more... memorable. A quick run through Gizoogle could give us some product names that would really make the customers say, "I gotta get me some of that."
Here's what I came up with (using the Textilizer, a hip hop dictionary, and a little poetic license):
old name: Lotus Domino Unified Communications
new name: Lizzle Dominizzle MC fo tha People

old name: Lotus Domino Document Manager
new name: Lizzle Dominizzle Rap Sheet Pimp

old name: Lotus Instant Messaging and Web Conferencing
new name: Lizzle Dominizzle 411 wit a Shout Out to tha Homeys

old name: Lotus Team Workplace
new name: Lizzle Dominizzle Bootylicious Gangsta House

old name: IBM Workplace Services Express
new name: I-Bizzle-Eminem Quick Trick at tha Crib

old name: IBM Workplace Rich Client
new name: I-Bizzle-Eminem Bling Bling Johnny
Yep, that's what'll pull the scratch outta the phat pockets, dawg.
Admittedly, I have neither the business nor the technological acumen that the folks at Google do, so it's pretty silly of me to second-guess what they're planning to do in the browser arena. But then again, most of the other people who are spreading rumors and ideas about this whole thing also lack such acumen, so I'll just close my eyes and jump into the fray.
On the one hand, I'm sure Mr. Goodger was hired because he's a smart guy (I don't know him, but it's a reasonable assumption). Fair enough. "Google hires smart guy." Not much of a news story. But the more I think about what Google has been doing recently, the more sense it makes that they've hired that particular smart guy.
Gmail, Google Suggest, and Google Maps are three of the hot things that have come out of the Google labs recently. They're still based around that essential component of all things Google (searching) but they also step into some murky waters -- web browser UI development. Not slick graphics and pretty boxes sort of UI, but rather the dynamic manipulation of browser pages and content.
A lot of what they've done as far as that goes (DHTML tricks, background XmlHttp and iFrame requests) is impressive, but you know that the guys who are doing the programming there have got to be banging their heads against the wall with these sorts of projects, cursing the fact that it's so convoluted to get this stuff to work sometimes. It's amazing that they've been able to do what they've done so far, just like it's amazing that someone might be able to take an old Honda and rework the engine and make it go 180 mph, but at some point they've just got to be telling themselves, "Wouldn't it be so much better if the technology had been built around doing this in the first place?"
Enter Firefox, the little browser that could. It's rapidly taking over Internet "market share" from Internet Explorer, meaning it's moved from a cute little piece of software to one that web site designers are actively developing for and testing against. As such, it's in a position where it can potentially affect the creation of new web standards and offer new features that become mainstream (and expected) capabilities. It can push the envelope, and people will notice.
While I don't see why Google would want to "own" the Firefox code (because that would require a tremendous amount of maintenance), and I don't see why they would want to come out with their own browser (because why does the world need another browser, really), I can certainly see why they would want to influence the development of the Firefox browser, especially now that it's reaching critical mass.
How convenient would it be for the people who are working on Google Maps or Google Suggest (or whatever other amazing things those people are working on) to walk past the office of the lead Firefox developer and make little comments like, "It was good to have you at the party last night. By the way, it would be really cool if we could do this one thing on a browser..." And maybe instead of having to make iFrames do backflips, some little bit of magic will make it into the codestream of the next release.
Just like Ray Ozzie's comment about being able to play around with a "scan unread" option in Lotus Notes e-mail because he "owned the coding pencil".
Or from the business standpoint, it's like IBM sponsoring the Eclipse project. They have first-hand input into the development of this open-source solution that they're using to build their commercial software (Websphere Studio). Because it's open-source, all the anti-establishment hacker types out there are more than happy to spend their weekends working on Eclipse plug-ins and bug fixes (would they do the same for Big Blue itself? umm, no). Because it's free, Eclipse can claim a larger market share, thus pushing Eclipse-compatible and therefore Websphere Studio-compatible components.
So maybe it's not about Google owning the browser, maybe it's about Google pushing the direction of the browser. The Internet Explorer team has been planting "Microsoft-friendly" code and components into their product for years now... perhaps Google sees the value there.
The first agent I wrote was this:
s1$ = "a"
s2$ = "b"
startTime! = Timer()
For i& = 1 To 2000000
	s$ = s1$ + s2$
Next
totalTime! = Round(Timer() - startTime!, 2)
Print "Time Elapsed: " & totalTime! & " seconds (" & _
	(totalTime! / i&) & " seconds per concatenation)"
(Okay, to be totally honest, my first agent used s$ = "a" + "a", but then I realized that the compiler was optimizing the concatenation somehow, probably because I was using constants. So I switched to s1$ and s2$. But anyway...)
Running the agent 5 times, I averaged a total time of 1.90 seconds per agent run, or 0.00000095 seconds per concatenation. Using "&" instead of "+", I averaged 1.93 seconds per agent run, or 0.000000965 seconds per concatenation.
But then I realized that the concatenation itself is actually faster than that, because there's the tiny amount of overhead for the looping, as well as the tiny amount of overhead for variable assignment. So I ran the agent again using:
s$ = "ab"
within the loop, so I could subtract out all the activity other than the concatenation itself. The "non-concatenating" agent actually took 1.40 seconds, which makes me believe that the concatenation alone (all looping and variable assignments aside) was really taking .50 and .53 seconds respectively, which ends up being 0.00000025 seconds for "+" concatenation and 0.000000265 seconds for "&" concatenation.
As far as my programs are concerned, that's (A) instantaneous and (B) identical.
I didn't stop there, of course. Here are some result comparisons, all using 2,000,000 iterations, averaging the times from 5 agent runs, and subtracting out the 1.4 seconds of "non-concatenation" time in each case:
| concatenation | total time (2,000,000 iterations) |
| --- | --- |
| s1$ = "a" : s2$ = "b" : s$ = s1$ + s2$ | 0.50 seconds |
| s1$ = "a" : s2$ = "b" : s$ = s1$ & s2$ | 0.53 seconds |
| s1$ = Space(1000) : s2$ = Space(1000) : s$ = s1$ + s2$ | 5.06 seconds |
| s1$ = Space(1000) : s2$ = Space(1000) : s$ = s1$ & s2$ | 5.07 seconds |
| s1$ = "a" : s2% = 1 : s$ = s1$ + Cstr(s2%) | 1.69 seconds |
| s1$ = "a" : s2% = 1 : s$ = s1$ & s2% | 1.65 seconds |
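The same measure-and-subtract approach translates to any language. Here's a rough Python equivalent of the harness (the absolute timings will obviously differ from the LotusScript numbers, and lambda call overhead stands in for the loop/assignment overhead):

```python
import time

ITERATIONS = 2_000_000

def time_loop(body):
    # Run the body ITERATIONS times; return elapsed wall-clock seconds.
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        body()
    return time.perf_counter() - start

s1, s2 = "a", "b"
overhead = time_loop(lambda: None)     # looping + call overhead only
concat = time_loop(lambda: s1 + s2)    # overhead + concatenation
print(f"concatenation alone: {concat - overhead:.2f} seconds")
```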
Draw your own conclusions (including the conclusion that I did a crappy job of testing, which is fine).
Keep in mind that we're just talking about the concatenation itself here, not the additional overhead required by doing such things as building a large string by concatenating one or more strings over and over. If you're going to build a string that way, using more than a hundred or so concatenations (as I did with my XmlChars class, for example), you might want to consider using my LotusScript StringBuffer class instead.
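The buffer trick matters because naive repeated concatenation copies the whole accumulated string on every pass, which is O(n^2) overall. In Python terms (an analogy, not the LotusScript class itself), the two approaches look like this:

```python
def build_by_concat(parts):
    # Each concatenation may copy the whole accumulated string:
    # O(n^2) total copying in the worst case.
    s = ""
    for p in parts:
        s = s + p
    return s

def build_by_buffer(parts):
    # Collect the pieces first, join once at the end: O(n) total copying.
    # This is the same idea as a StringBuffer class.
    buf = []
    for p in parts:
        buf.append(p)
    return "".join(buf)

parts = ["chunk"] * 1000
assert build_by_concat(parts) == build_by_buffer(parts)
```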
But that's an efficiency discussion for another day...
It took all of about 3 lines of code to add this to my site (see instructions on the Y!Q info page), so in a few minutes I was ready to roll. You really only have to add a reference to a .js file at the top of the page, add one or more form blocks that display the little "Possibly Related Stuff" link(s) wherever you want them, and then decide what you want to use as your "content" block. Jeremy is just using his blog entry title, and the Y!Q page mentions that you can use an entire paragraph, but I decided to add a keywords field for each blog entry and let Yahoo key off of that. Any of my blog entries that I've added keywords to will have a Y!Q link at the bottom of the entry, the other ones won't. I'll decide whether I want to go back and add keywords to my old entries as I determine how useful this is.
If you're reading this via an RSS reader, go ahead and click through the permalink to my actual site to see how it looks. It's pretty cool -- when you click the "Possibly Related Stuff" link at the bottom of the blog entry, a little window pops up with up to 5 potentially relevant results, based on the content block I've defined.
In my brief initial testing, it seemed that having too many keywords in the content tag would return almost no results. This is probably because they're still tweaking the Y!Q code on the backend (interestingly enough, using "Y!Q blog" as keywords only returned a single result -- to some kind of Spanish page). I can see how this could end up being useful as it matures, to add semantic threads to blog (and even news) content. Good idea.
But anyway, I work downtown, and I walked to "The Landing" for lunch yesterday. The Landing is a pseudo-mall right off the river, next to the Adams Mark hotel and only a couple miles from the stadium. They're expecting a lot of Superbowl foot traffic there, and it seems like every unused nook and cranny is housing someone selling either beer or t-shirts.
So when I went to the food court at the Landing to get a slice of pizza, I looked up at the prices and almost laughed out loud. Pizza was $4 per slice, drinks were $3 each (water was $0.90). My two slices of pizza and a Coke were going to cost $11. This was at least double what the normal prices are, and more like what I would expect to pay at a movie theater, not an open food court.
Apparently all of the restaurants in the food court jacked up their prices this week. One of my coworkers went to order a burrito at the same place he orders one every week, and they warned him to look at the menu before he got any food. His normal $5.50 lunch plate was well over $10 now. Until next week, anyway.
Maybe this is normal business, though. I lived and worked in downtown Atlanta during the Olympics in 1996. Several of the restaurants there gave out "locals cards" to their regular customers just before the games began, and if you showed your locals card when you ate there they would charge you the normal amount for food, not the hyper-inflated rate they charged to everyone else.
So for anyone who came down to Jax for the Superbowl and found things a bit pricey, please know that it's not usually this bad. It's only expensive when we have guests coming to town.
Man, that ended up being one long page of words. I hope you all enjoyed the "coverage" I provided.