The Battle of Marathon

Credits: Greg London

I cannot speak of my crime.
But I can tell you of my punishment.
In 498 BC, the Greek cities of Ionia rebelled against their Persian tyrants. Athens and Eretria sent ships to help Ionia in the fight. The Ionians marched on the city of Sardis, home of the regional Persian governor, and burned the city. Athens sent its own tyrant, Hippias, into exile, and turned towards democracy. But in 494 BC, the Persians defeated the Ionians at sea, ending their rebellion.
King Darius I ruled the Persian Empire at this time. With the Ionians defeated, Darius was left infuriated with Athens and Eretria for their part in the rebellion. The Athenian tyrant, Hippias, went to Persia and met with King Darius. Hippias offered his help in conquering Athens in exchange for his return to power.
Darius sent envoys throughout Greece with a demand that each city pay a tribute to the Persian Empire. Many cities paid the tribute out of fear. Athens and Sparta replied by throwing the envoys to their deaths. This further infuriated Darius and gave him another reason to attack.
In 492 BC, Darius sent his fleet against the Greeks, but it was wiped out by a storm near Mount Athos. In 490 BC, Darius sent a fleet of six hundred triremes under the command of two generals: his nephew, Artaphernes, and a Mede named Datis. The Persian fleet attacked the city of Eretria first. The people of Eretria defended themselves from behind the city walls for nearly a week. But then the city was betrayed when two of Eretria’s chiefs let the Persians into the city.
The Persians sacked Eretria. Those not massacred were taken as slaves. Word of the Persian attack spread, and several Greek Generals met in Athens. They sent a professional runner named Phidippides to Sparta to request help. Phidippides covered the distance from Athens to Sparta, 150 miles, in two days.
At the same time, Hippias led the Persians to the island of Aegileia, where they dropped off their newly acquired Eretrian slaves. Hippias then brought the Persian fleet to the eastern shore near Marathon, an eight hour march from Athens.
While the Athenians waited for Sparta’s reply, the Greek Generals plotted their strategy. Rather than suffer the same fate as Eretria, the Generals decided to fight the Persians before they reached the city of Athens. Hoping to catch the Persians by surprise, the Athenian army marched to the plains of Marathon, twenty-two miles away.
The Athenians took their position on the high ground of a mountain range overlooking the plains of Marathon. They made camp above fifty-thousand Persians.
The Persian Army contained Infantry and Cavalry. They were lightly armored, wearing a tunic with metal plates, no helmet, and carrying a wicker and leather shield designed to stop arrows. Persians carried short spears, but their most fearsome weapon was the composite bow. The Persian King had once told the Greeks that his army’s arrows were so many that they could blot out the sun. To this, the Greeks replied, “Good, then we can fight in the shade.”
Upon seeing the Athenians, the Persians deployed their troops in standard line formation. They put their best troops, ten-thousand Persian Immortals, in the center and their poorer troops, conscripts and foreign fighters, on the wings. The cavalry took the flanks. They formed a line that was over half a mile wide and thirty men deep. Not wanting to attack the Athenians who held the high ground, the Persians made camp and waited.
The Greek Army was an army of Hoplite Infantry, named after their heavy shield, the hoplon. The Greek infantry wore heavy armor, a solid bronze chestplate and smaller plated armor covering the rest of their body. Their hoplon was made of wood and bronze. Their primary weapons were the long spear and short sword, and they would often deploy their troops in a phalanx formation with eight lines of men.
The Athenian Army reflected the Democratic notions of their city. The army was divided into tribes numbering one thousand men, and each tribe appointed a General. All ten Generals voted on all operations. The city of Athens appointed a man named Callimachus as Polemarch, or eleventh general, to prevent a tie from occurring during a vote. The Generals even took turns on a daily basis as to who commanded the whole army.
The Generals first voted to wait. The Persians would not want to attack as long as the Athenians held the high ground. And the Athenians were waiting for the Spartans to send help. While the Athenians waited, one thousand Plataeans arrived from the northwest to fight the Persians. This encouraged the Athenians. But then Phidippides returned with news that Sparta would not send help for another five days plus a number of days to march to Marathon.
The two armies faced one another in a stalemate. After eight days, the Athenians watched as thousands of Persians boarded their ships while others reinforced their positions. The Athenians were afraid that the Persians planned on leaving a force to contain them while the remainder of the Persian army sailed around the southern peninsula to attack Athens on the western shore.
The Athenian Generals were now evenly divided between fighting at Marathon and pulling back to Athens. General Miltiades thought the longer they waited, the worse their chances of success. Miltiades went to the Polemarch, Callimachus, to sway his vote. Miltiades gave Callimachus a rousing speech and convinced the Polemarch that they must fight the Persians without delay. When the vote was tallied, Miltiades ordered the army into battle.
Miltiades knew something of the way the Persians fought. He deliberately arranged the Greek army with a weakened centerline, four lines of Hoplites, and strong flanks, eight lines of Hoplites. Callimachus commanded the right wing. The Plataeans formed the left wing. Between them, the ten tribes lined up in order. In all, the Greek formation was well over half a mile wide.
On command, the Athenian Army marched in formation towards the Persians. The Persians were somewhat surprised by this, thinking the Athenians mad for giving up the high ground and engaging an army four times its size. The Persians readied themselves to fight. When the Athenian line was 200 yards distant, the Persians let loose a torrent of arrows to rain down on the Greeks.
At that moment, the Athenians charged the Persians at a run, and the arrows missed their targets. The Athenians closed with the Persians, in formation, on the double. The Greeks’ charge caught the Persians by complete surprise, and the Persians scrambled to prepare for hand-to-hand combat.
The two armies clashed and fought fiercely.
The Athenian flanks held their position, but their weaker middle was pushed back. The Persians drove through the center, only to be surrounded and completely enveloped by the Athenians. This was Miltiades’s plan; the Athenians engaged the Persians in close combat from all sides. The heavy arms and armor of the Athenian Army shattered the short spears and wicker shields of the Persians.
Their dead numbered in the thousands before the Persians managed to break out of the Athenian trap.
They retreated to the shore to reboard their ships. The Athenians chased them into the sea as the fighting became more chaotic.
Callimachus, the general and Polemarch, was killed here.
Stesilaus, one of the Tribe Generals, was killed here.
And, I can tell you of my fate as well.
I fought alongside an Athenian named Epizelus, son of Cyphagoras. Our ranks had broken, and the combat dissolved into wild melee. The Persians fought hard to board their ships. We pursued, but we managed to capture only seven ships out of their entire fleet.
Epizelus and I made our way near the shore. We reached the middle of the fighting, and two Medes charged us. We clashed furiously, and one of the Medes grazed my arm before I ran my spear into his chest.
Words that were not mine echoed through my head.
“That no man shall die by your hand.”
I looked at my arm and could see that my scratch had become a mortal wound. Epizelus was getting the better of the second Mede, when I saw the Giant coming towards us.
This Mede towered over the battlefield. His beard draped the entire length of his shield. He hefted his sword in the air and looked intent on splitting Epizelus down the middle.
I took one look at my wound, knew that I had broken my probation, and understood that I would pay for it with my life. I looked at the Mede and remembered my voice. I bellowed at him.
“May your eyes be blind to him!!! Take me!!!”
Epizelus finished his opponent and spun around to see this Mede, who stretched into the sky, walking towards him.
The Giant brushed past him and dropped his blade on my collarbone, collapsing my chest and shattering my spine. The Giant continued onward as if he did not see Epizelus.
I collapsed to the ground, dying as I fell.
Epizelus cried out that he could not see. Blindness had stricken him as it had stricken the Mede, only in different form.
As the life poured out of me, memories flashed before my eyes in an instant. My mind was numb with shock as I lay there dying.
I had been here at least three times before.
The remainder of the Persian Army scrambled aboard their ships, pulling away from shore. They picked up their prisoners on the island of Aegileia and sailed for Athens. They would attempt to reach Athens before the Greeks could return over land.
The Athenians had won this battle, but they could yet lose the war. The entire Athenian Army moved with all possible speed to return home.
Phidippides ran ahead and covered the distance in three hours. When he arrived, he gave the news that the Persians had been defeated and that they were sailing around Attica to land at Athens. He then collapsed from exhaustion and soon died. The people manned the walls and made it appear that an army defended Athens. When the Persian ships arrived, they saw what appeared to be a well-defended city, and they hesitated.
Soon thereafter, the Athenian Army finished its own run from Marathon to Athens. When the Greek army came into view, the Persians decided they had suffered enough and sailed home.
In the end, 6400 Persians and 192 Athenians perished.
The city-state of Athens was saved, as was the fledgling democracy that it nurtured. The shape of the world changed because men took a stand, against all odds, for freedom.
Eons later, men and women would run twenty-some miles for a cash prize and call it a “Marathon.”
But the true prize is much more precious than Gold.
The prize is nothing short of freedom itself.


Business leaders in the city have stepped forward to rescue the Pride festival.

Debts totalling more than £180,000 owed to more than 20 companies and charities have plunged the future of the major event into doubt.

Now Michael Deol and Robert Webb, the owners of Club Revenge in Old Steine, James Ledward, editor of GScene magazine, and Paul Kemp of Aeon Events have formed a community interest company in a bid to run Pride in Brighton and Hove for 2012.

They have vowed that all profits will be ploughed back into LGBT charitable causes. According to plans submitted to Brighton and Hove City Council, Stagfleet Ltd, the owner of Club Revenge, will underwrite the whole event. The directors also own 10 venues across the city, including The Dorset, Hove Kitchen, Zafferelli’s and the Sackville Hotel site.

Mr Deol said that £1 of the price of every ticket sold will be ring-fenced and donated directly to the Rainbow Fund to distribute through its grants programme to local LGBT/HIV organisations and charities that provide front line services to the LGBT community in Brighton and Hove.

He said: “We are delighted to have brought together what we think is a superb group of people to run and organise this year’s Pride in Brighton and Hove. “We have been negotiating on the future of the event with Brighton and Hove City Council and look forward to its decision.”

A spokesman for Brighton and Hove City Council said that the Stagfleet bid is one of two under consideration. He would not disclose the identity of the second bidder.

He said: “A decision will be made by Councillor Geoffrey Bowden, cabinet member for culture, recreation and tourism, on March 6.”

The Argus understands that according to the group’s plans the route of the march and the venue for the party in Preston Park will remain unchanged at the event planned for September 1.

Former vice chair of Pride (South East) Nick Beck said that lessons would be learned from previous failures.

He said: “There is no reason the event last year should not have made a profit.

“I am confident that with prudent money management the festival can be a commercial success and will raise money for LGBT causes.

“The business acumen of Mr Deol and Mr Webb is well-proven.”

A major festival has unveiled new plans.

The organiser of Pride in Brighton has launched an information hotline. Trevor Edwards, director of Pride Brighton and Hove, said information about this year’s festival is available around the clock on 01273 257225.

Wilde Ones International has been appointed as production company for the event in Preston Park on September 1. Tickets will go on sale in April.

Brighton and Hove City Council has given the go-ahead to the community interest company formed by Michael Deol and Robert Webb, the owners of Club Revenge in Old Steine, and James Ledward, editor of GScene magazine, to organise the event this year.

Mr Edwards said that £1 of the price of every ticket sold will be ring-fenced and donated directly to the Rainbow Fund to distribute through its grants programme to local LGBT/HIV organisations and charities that provide frontline services to the LGBT community in Brighton and Hove.

He said the organisers want to involve more community groups than ever before.

Dean Parker, boss at Wilde Ones in London said: “We are looking forward to working with Pride and Brighton and Hove City Council this year and will be making a full announcement soon.”

Social web for the long-term

Now that the biggest waves of Buzz hype are hopefully behind us, it’s a good time to concentrate on what Google Buzz actually is and what it isn’t. I have followed Buzz with great interest, and I’ve previously talked about Jaiku, feeds and discussions on the web in general here. I even pushed Plaxo at one point, but they are pretty much dead in the water right now. I was a couple of years off, and a technology wrong, with my prediction of a sort-of real-time web in 2008.

In a way I view Google Buzz as a reference platform, like Google Wave Preview, instead of a finished product. Of course, because Buzz is right there in Gmail’s interface, Buzz deserved all the critical comments about its launch that it got. It could be argued that without exposing it to the larger public at the start, it would have been impossible to get all those great ideas to make it better. One interesting thing to note is that the most requested features for Buzz are UI-related. However, I’m more interested in what makes Buzz work behind the scenes, because if Google can get the critical mass behind this, things are going to be great.

It was again a sad example of the sorry state of technology blogging when Buzz first hit the web. In that little world that’s so enamored with Twitter, Facebook and status updates, it never occurred to anyone that Google was aiming much higher. One of the worst offenders was the serial troll Lyons. He was followed by lots of others who came up with equally lame puns in their headlines without actually figuring out what they were looking at. Instead we got petty lists of “fails” in Buzz. Yeah, on the surface that these Techmeme all-stars barely skim, Buzz might resemble Twitter, but the differences are pretty obvious from the start.

The attention spans are so incredibly short that they have completely forgotten that even in this age of agile Web 2.0 iterative processes, things take time. This was probably best illustrated by this post, where the author, totally oblivious to the lineage of Buzz, claimed that

As always, time will tell whether this is a game-changer or just another Jaiku, the Twitter competitor that Google bought but never found a way to leverage.

In their defense, even Ars Technica got it wrong.

The only reason I can come up with for why people instantly associated Buzz with Twitter is the simple user interface. Much more interesting comparisons would have been with Friendfeed (which kind of tried to do this in a simple way), Yahoo Updates (which kind of tried to do this in a difficult way) or its genetic ancestor Jaiku (which kind of did this LBS Twitter thing in a pretty nice package a good three years ago).

While I agree that Buzz is a rather odd combination of product/platform/project, I do find it exciting that Google has the resources to just try things. We are so early into this social web thing that if someone pretends they know exactly what works, they’ll be proven wrong in a fortnight. Sure, I do agree that Google might be forgetting that what people want are applications and not technology (a mistake Nokia keeps on repeating, and one reason why they are so incredibly lost in the technology woods; or like Yahoo, which just pumps out nice web tech with no apparent apps or revenue streams). Google has the money to experiment and the mindset to test things on a large scale. That takes balls. That’s what the whole world wide web was about in the first place: experimentation. You have to be pretty clueless if you take anything on the internet right now for granted.

Seriously, take a long view here. Even on the internet, you need some time to lay the groundwork, even when you’re working in the application layer. If you think about the 2.5-year timeline between Jaiku’s acquisition and Buzz, there were little hints along the way in many of Google’s products. To be able to have something like Buzz, Google had to first come up with a friend/follow system and a location system. You know, like following other people on Google Reader and Google Latitude? The ADD-riddled tech bloggers were pretty hyped about Google Latitude and how it was going to kill Brightkite, Foursquare and other LBS services, but somehow Google Buzz failed to generate a single comparison to these services.

But all this is just technology. What about the revolution that I hope Google can pull off with Buzz? What’s the beauty in Google Buzz? You only need to check Google’s API page for Google Buzz and you’ll soon realize that all the stuff that makes Google Buzz work is built on open standards, which enable pretty ground-breaking integrations that could just solve the mess that discussion on the internet is in right now.

As a sidenote, when tech bloggers complain that they can’t add this or that Twitter stream to their Google Buzz timeline, or that the tweets are not in real time and all that, they would only need to look at that API and realize that, because Google is looking at the whole thing at a much higher level, it’s actually the publisher who needs to find a way to enable a thing awkwardly called PubSubHubbub, and in that instant all the content is pretty much real-time. Of course, I have no idea if it is at all feasible to use PubSubHubbub at the scale of Twitter, but the point is that Google is not planning to have custom pipelines to Buzz, but to play with common, open protocols and APIs. Another point is that once your content works with Buzz, it works with any aggregator/social app that has decided to adopt that same common, open infrastructure.
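To give a feel for how light the publisher’s side of PubSubHubbub is, here is a minimal Python sketch of the “publish” ping a publisher sends to a hub whenever its feed changes. The hub and feed URLs below are placeholders, not real endpoints:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

HUB_URL = "https://hub.example.com/"  # placeholder hub endpoint


def build_publish_ping(topic_url):
    """Build the form body for a PubSubHubbub 'publish' notification.

    The publisher POSTs this to the hub whenever the feed at
    `topic_url` has new content; the hub then fetches the feed and
    pushes it to subscribers, making updates near real-time.
    """
    return urlencode({"hub.mode": "publish", "hub.url": topic_url})


def notify_hub(topic_url):
    """Send the ping (a network call; requires a reachable hub)."""
    body = build_publish_ping(topic_url).encode("utf-8")
    req = Request(HUB_URL, data=body, method="POST")
    with urlopen(req) as resp:
        return resp.status


print(build_publish_ping("https://example.com/feed.atom"))
# hub.mode=publish&hub.url=https%3A%2F%2Fexample.com%2Ffeed.atom
```

The point of the sketch is that the protocol’s publisher side is just one tiny HTTP POST; all the fan-out work lives at the hub.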

So, instead of trying to centralize every user and every piece of content on their own site, like Facebook and Twitter, Google has had the guts to try to harness all the discussion on the web into their service. It’s going to be a happy day when this post right here, and all the discussion and comments it might generate, are all happily syndicated in Buzz.

The open nature of Buzz is not all good news for some creatures on the web. On Twitter and Facebook you can follow and be followed by inanimate products and abstract brands, and they can have pages and whatnot, but right now, to take part in Buzz you need to have a Google Account, and that means you have to be a natural, real person, and you shouldn’t have more than one account. This is pretty bad news for all the “SEOs” and other “internet marketing experts”. It is also excellent news, and pretty amazing given the forcing-marketing-down-your-throat norm in this “social” happy place we call Web 2.0. Simply, that means real people and real feeds that try to integrate the real discussion on the web. All those @’s and #’s? What about real discussion with real threading and real topics? What about a renaissance of long-form personal publishing? (If you didn’t follow any of the previous links, please read this. I’m totally with DeWitt Clinton here.)

The trick to making all this work, and where Friendfeed and Plaxo failed, is critical mass. I’m pretty sure that the guys at Facebook are looking hard at Friendfeed again and rethinking which parts they should chop off it, because if Google can truly pull this off and make this pipe-dream of semantic and social aggregation nirvana, one that plumbs everything it can get its social graph on, actually work, Facebook has no other option than to open up, and that’s pretty much the end game for them right there.

The technical challenge is really complex, and it’s going to take some time until all the pieces are in place. Google has put their thing out in the open, and it is now the publishers’ turn to make some back-end changes so that this discussion utopia can get its legs. I’m not expecting the social web to turn on its head in a day, but this is some serious stuff for the long term. The reason I think Google can pull this off is that Google just needs to show ads on the web to make this worthwhile, while Facebook et al. need to monetize every inch of their userbase. Google can, and it is to their advantage to, utilize open systems and not lock people in. And, hey, maybe things don’t pan out. Google has the cash to try something else.

Thoughts on the (iTablet) iPad – connectivity, apps, multitasking, integrating with Macs

The following is a draft I wrote prior to the announcement of the iPad, but which I didn’t publish because it was a series of hypotheses based on an as-yet non-existing product. It’s a series of thoughts on what an interface for a touchscreen larger than an iPhone’s might look like. It is inspired both by my experiences with Macs and, more recently, with an iPod Touch. Here goes.

A couple of thoughts I had last night (written on 13.01.2010) about interfaces, the current state of development for the iPhone OS, how Apple could build a hybrid of Mac and iPhone OS, and how the company could build multi-tasking into its rumoured tablet. My thoughts were the following:


a. A new category: I don’t think the iTablet, if it exists, will be either a Mac or an iPhone. My super-superficial reason: it doesn’t fit in the Mac line-up depicted on the online Apple Store (see pic), but a more underlying reason is that I don’t see space for it in either a Mac category or a mobile phone/media player category. Which is not to say that it won’t do either well, but I think it will fall more into the class of netbooks, though of course with the purpose of bombing those low-tech, low-innovation devices out of the water… just like Apple did with MP3 players and with phones. Note from today: as it turns out, the iPad is depicted below the iPod, iPhone, and Mac lines, but time will tell where it will be once it’s on sale.

b. The Keyboard: I think that any 10″ screen will demand more connectivity to secondary (Apple) devices than the iPhone allows for. That means an external keyboard and mouse, which transforms the tablet into a desktop. I have fewer complaints about the software keyboard now, after working with a Touch for a while, but I still don’t see it as an alternative for longer texts, which a larger screen would warrant. Some months ago, I made a stupid mock-up of the iPhone + a keyboard (see pic), which is how I envision it looking (only better).

c. The App Store: 3 billion apps downloaded, Apple just reported, which also suggests a kind of lock-in. For better or worse, developers have accepted the App Store, and I think it works for both sides for several reasons: more protection from pirates, more predictability for developers when developing for the black hole that is Apple, and more control for Apple, which is what Apple likes, not to mention new income streams for both. I think the App Store will continue to exist and will present new challenges when talking about a larger screen. Note from today: I don’t believe that what we will get to see in less than two months will be what people were playing around with after the Apple keynote. iPhone apps inflated to a larger screen, come on?

d. The User Interface: I’ve written previously about Quick Look in Snow Leopard and how I also dug its slight innovation in terms of in-icon playing of media. Previously, OS X also introduced Dashboard in Tiger (I believe), whose interface, on the surface at least, resembles the iPhone’s. My view is that Apple will give developers the option to just keep the same resolution apps as they have offered before, though not exclusively of course. But imagine “Quick Looking” an app and still having it run inside its “icon” while the user does something else. For the rest, I of course think that full-screen apps will exist, which is where Dashboard comes in, or at least a type of Dashboard. (Note: that was wrong. More below.)

e. Integration with the Mac: One of the most underused interfaces, at least on my Mac, is Dashboard, which allows people to have continuously open widgets on anything from news, to games, to radio, to system monitoring. It’s useful for those purposes, but not really something I spend more than a few minutes at a time with. Yet the first thing that came to mind when thinking of a “Tablet” using both iPhone and Mac interface components was Dashboard. It creates a new layer on top of a traditional desktop, allowing for user input and information display. When I envision someone running the apps that would also work on the “iTablet”, I think of it either as opening up a new layer on your Mac and running the very same apps on it through something like a Dashboard-like interface, or, and the simplest solution is usually the best, as having the Tablet sync through iTunes with regular applications on the Mac.

Note from today: well, obviously this was wrong, but there have been several theories aired of having a type of Dashboard on the iPad for apps like calculator and weather, which don’t at all make sense to run in single focus on a larger screen than the iPhone.

Further thoughts from today: I do think that we will see a new OS update for both the iPhone and iPad before the release of the iPad. This will address the concerns that people have about it just being a larger iPod Touch. For the rest, to me the only downside to this device is the lack of a front-facing camera for video-calling, and some minor things. And I also think it’s the perfect “parent device!” What the Wii was to gaming, the iPad is to computing, addressing a very very blue ocean.

As previously stated, I’m still in line to get one this year, though only after trying one first.

Seamless Roaming

Wireless LANs are being rolled out in places such as trains, railway stations, airports, coffee houses and garages, creating WLAN hotspots. These are pushing forward the dream of ‘wire-free’ working, but this does bring further challenges, such as integration of WLAN access, network security and other wireless access methods. This is where Brand can help.

Brand’s Apollo solution, matured over 14 years of successful deployment, is making mobile data a reality for business-critical data applications, using Seamless Roaming for many carrier and enterprise users throughout the world. It removes the uncertainty of using a wireless network to transfer vital information by transparently integrating GSM, GPRS and 802.11b networking with LAN environments, both within the enterprise and in the field. It provides automatic recovery from dropped connections without repeating a data transaction, and assures that data is never lost, corrupted or compromised.

In a train application, the Brand Communications Seamless Roaming solution automatically manages data devices and aggregates bandwidth from whatever is available to it along a given route. The system can send data down all the public operator networks, packet-switched or circuit-switched, in any combination at the same time, and will automatically bring high-speed WiMAX, Microwave, PWLAN, 2.5G, 3G or Satellite on-line as they come into range. The system continually monitors the performance, integrity, availability and latency of each data pipe to ensure that optimal use is made of it. The solution can also accommodate asymmetric working to allow the use of broadcast-based bearers. The system can provide seamless roaming across all the bearers, or aggregate them as required to achieve a high-speed service, ensuring continuity of connection without the end user having to restart their Internet session. The system has also been fully tested with 3G devices, and incorporates these into the data path as they come into coverage, which in the early days of deployment is most likely to happen in metropolitan areas.
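The bearer-selection idea described above can be sketched in a few lines. This is a toy illustration: the bearer names, latency figures and the `pick_bearers` helper are invented for the example and are not Brand’s actual implementation:

```python
def pick_bearers(bearers, max_links=2):
    """Return up to `max_links` in-coverage bearers, fastest first,
    so they can be used singly or aggregated for extra bandwidth.
    Each bearer is a dict with 'name', 'in_coverage' and 'latency_ms'."""
    usable = [b for b in bearers if b["in_coverage"]]
    usable.sort(key=lambda b: b["latency_ms"])  # prefer the fastest pipes
    return [b["name"] for b in usable[:max_links]]


# Measured state of the bearers at one point along the route (invented data).
bearers = [
    {"name": "satellite", "in_coverage": True,  "latency_ms": 600},
    {"name": "3g",        "in_coverage": True,  "latency_ms": 120},
    {"name": "wimax",     "in_coverage": False, "latency_ms": 40},
    {"name": "pwlan",     "in_coverage": True,  "latency_ms": 25},
]
print(pick_bearers(bearers))  # ['pwlan', '3g']
```

Out-of-coverage bearers drop out automatically, and as coverage changes along the route the same call simply returns a different set of links.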

If the signal is lost, for instance during travel through a tunnel, the system will recover the data connection when signal has been regained, buffering data so that the mobile user does not have to re-connect to the internet.
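The buffer-and-recover behaviour can be sketched like this. The class and its names are purely illustrative and not part of the Apollo product:

```python
from collections import deque


class BufferedLink:
    """Toy model of buffer-and-recover: while the bearer is down
    (e.g. in a tunnel), outgoing data is queued; it is flushed in
    order once the signal returns, so the user's session never has
    to reconnect."""

    def __init__(self):
        self.up = True
        self.buffer = deque()
        self.delivered = []

    def send(self, packet):
        if self.up:
            self.delivered.append(packet)  # transmit immediately
        else:
            self.buffer.append(packet)     # hold until signal returns

    def signal_lost(self):
        self.up = False

    def signal_regained(self):
        self.up = True
        while self.buffer:                 # flush in original order
            self.delivered.append(self.buffer.popleft())


link = BufferedLink()
link.send("a")
link.signal_lost()       # entering the tunnel
link.send("b")
link.send("c")
link.signal_regained()   # leaving the tunnel
print(link.delivered)    # ['a', 'b', 'c']
```

The key property is that nothing sent during the outage is dropped or reordered; from the application’s point of view the connection never went away.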

The solution also includes a Web Optimiser engine developed by Brand Communications that ensures that heavy graphics are optimised for un-wired connections, reducing the size of web site downloads and so enhancing the user experience.

Brand provides significant protection to the mobile user and the data by securing the VPN with AES (Advanced Encryption Standard, the replacement for 3DES).

Brand’s solutions are deployed with many carrier and enterprise customers across the world, and the company specialises in implementing mission-critical wireless solutions and strategies. Brand’s Apollo solutions make mobile data a reality using robust session management. They remove the uncertainty of using a wireless network to transfer vital information and ensure compatibility with almost all data devices and networks, resulting in a future-proof investment for any organisation.

The solution is equally at home in the hands of a single-device consumer, or as an intelligent router managing a train backhaul or other mission-critical broadband application. The system can also handle automatic hotspot logon through profiles, which can manage authentication for the user without requiring the web-based pop-up screen. This can also be used to optimise connectivity for the user in public hotspots billed on cumulative time.

Code Server: a brief presentation

Q1. What is a Code Server? 
A1. The term “Code Server” refers to the analogue of a “File Server” for program code.
A File Server arrangement allows you to open a file without the OS knowing the intricacies of how and where the file is stored.  A Code Server arrangement allows you to open a program without the OS knowing where the pieces of the program are, and with only the immediately necessary modules at the client.
Until Code Servers were invented, operating systems had to refer to programs as files.  Programs had to be directly visible as a file that the computer’s storage system could access.  With a Code Server arrangement, the access intrinsics can be reduced to a series of protocol calls which avoid the issues of code management such as installation, versioning, and sharing.  This “client-serverization” of the relationship between an OS and its program files (modules) is at the heart of the Code Server.

Q2. How does a Code Server work? 
A2. The operating system, instead of making requests on files, queries a Code Server.
The Code Server: 
a. knows what kind of client it is serving, and will send different versions depending on the client’s processor, operating system, etc. 
b. knows the client’s preferences and will search for the particular version that the client requires (e.g. a test version, an upgrade, an encrypted version) before sending it. Software subscriptions can be handled this way, for example. 
c. can tell the client when it thinks that the client already HAS the code in question, thereby saving transmission bandwidth.
The Client: 
d. can make up rules telling the server HOW to search 
e. as long as it checks with the server on each request, is guaranteed the desired version even if an update has taken place since the last transmission.
The core of the technology is related to something called Dynamic Linking, which enables computer programs to be broken up into small pieces (modules) which exist and are maintained independently.  What a Code Server does is to enable the distribution of these pieces while keeping control of the relationships between them, by storing information about those relationships. Our term for this is “association”.
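The server behaviour listed in points (a)–(c) above can be sketched in a few lines of Python. Everything here (the repository layout, the module names, the `serve_module` signature) is invented for illustration; it is not a description of any real Code Server protocol:

```python
import hashlib

# Hypothetical in-memory Code Server: module name -> {(os, arch): build}.
# All names and the protocol shape are illustrative.
REPOSITORY = {
    "renderer": {
        ("windows", "x86"): b"MZ windows build",
        ("linux", "x86"): b"\x7fELF linux build",
    },
}

def checksum(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

def serve_module(name, client_os, client_arch, have_checksum=None):
    """Answer a client request: pick the build matching the client's
    platform (point a), and report 'unchanged' when the client already
    has that exact build (point c)."""
    code = REPOSITORY[name][(client_os, client_arch)]
    if have_checksum == checksum(code):
        return ("unchanged", None)   # saves transmission bandwidth
    return ("code", code)

# First request: nothing cached, so the Linux build comes down the wire.
status, code = serve_module("renderer", "linux", "x86")
# Repeat request with the cached copy's checksum: no re-download needed.
status2, _ = serve_module("renderer", "linux", "x86", checksum(code))
```

The checksum comparison is what lets the server skip retransmission without trusting the client's opinion of its own cache.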

Q3. There must be some client software on the client machine that is able to communicate with Code Servers and do the dynamic linking. Should this be a part of the operating system or a program you can download conventionally?
A3. To work nicely, the client will have a Mini Code Server (Mini-CS) that speaks the same protocol but is somewhat intimate with the dynamic linking mechanism of the operating system. To be clean, every LOAD by the OS needs to be routed through the Code Server. We did this with Windows to prove the concept – and it works.

Q4. Is machine code suitable for dynamic linking or do you need pcode or Java bytecodes or something similar?
A4. Machine code is fine. Dynamic linking of machine code existed in the earliest versions of Windows and OS/2, and it remains the basis of most operating systems (DOS is the exception).  We do NOT claim to be inventing dynamic linking, or pcode.  However, dynamic linking of pcode was not done under Microsoft operating systems until our first Code Server!

Q5.  So is the Code Server mechanism applicable to all kinds of processors?
A5.  Yes.  The Code Server idea is suited to any system that uses dynamic linking.  A broader reading of the patent would suggest it is applicable to any operating system that makes requests for modules of a program.

Q6. Doesn’t dynamic linking require a different kind of compiler output (obj)?
A6. No. No one distributes OBJ right now, but all executables are dynamically linked.  This includes EXEs and DLLs, as well as VxDs.  I am less sure about Unix (Linux), since dynamic linking is relatively new there (see the docs on SPRING).  Current compiler/linker output is designed to resolve API and inter-module calls to some set of slots which are ready to call the outside world.  DYNAMIC linking then resolves these, usually one module at a time.
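The claim that ordinary executables are already dynamically linked is easy to poke at from user code: the same run-time mechanism (LoadLibrary on Windows, dlopen on Unix) is what Python's ctypes module drives. A minimal sketch, assuming a C math library is present on the system (the glibc fallback name is an assumption):

```python
import ctypes
import ctypes.util

# The "slots ready to call the outside world" that A6 describes are what
# dlopen/LoadLibrary resolve at run time. ctypes exposes that mechanism
# directly: load a shared library and bind a symbol by name.
libm_path = ctypes.util.find_library("m") or "libm.so.6"  # glibc fallback
libm = ctypes.CDLL(libm_path)               # the dynamic link happens here
libm.cos.restype = ctypes.c_double          # declare the C signature
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))                        # 1.0
```

Nothing about `cos` was known at Python's own link time; the symbol was looked up and bound the moment the library was opened.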

Q7.  What do most users do currently to install programs?
A7.  A typically large file is copied or downloaded to the computer and installed.  This frequently involves copying support libraries to some known location, such as the WINDOWS directory. This is advantageous because once the copy is done, access is fast.  The disadvantage is that to update the software, the download generally has to be repeated, and there is no simple mechanism to detect or control which parts truly need replacing, or where they belong on the target machine.

Q8. Why is it disadvantageous to execute remote files?
A8. 
a. Slow – connecting to a file system requires a lot of overhead, both at connect time and at file-access time. 
b. Insecure – you need to take all sorts of steps to protect the remote file system from abuse.

Q9. Won’t installing software over the network be extremely slow (especially for large applications)?
A9. Most internet connections are intolerably slow, and it might take too long to install an entire program over the network.  This would not be a problem with a fast enough connection (e.g. ISDN or DSL), and seeing as there is a big push on the part of ISPs for faster connections, the issue of network speed might well become secondary.  However, Code Servers also help with the issue of speed – see here for an in-depth discussion.

Q10. I assume that if you start an application distributed by a Code Server it begins execution immediately after the main module has been loaded while the loading continues in background. The modules are loaded in the sequence they are required for program execution.
A10. Correct.  As long as all the pieces can be found.  That is, in principle if not in practice, how applications work right now.  It is only necessary to fault into memory the actual modules being executed – just as long as the tables and suchlike for the faulting in of the rest are completed during association, i.e. before we start.
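The fault-in behaviour described in A10 can be mocked up in a few lines: modules are represented by thunks that are only fetched from a (simulated) Code Server on first call. The module names and the `fetch` stand-in are invented for illustration:

```python
# Sketch of fault-in loading: the association step builds the call table
# up front; a module body is only fetched the first time it executes.
FETCH_LOG = []

def fetch(name):
    """Stand-in for a network fetch from the Code Server."""
    FETCH_LOG.append(name)
    return {"main": lambda: helper() + 1, "helper": lambda: 41}[name]

class LazyModule:
    def __init__(self, name):
        self.name, self.fn = name, None
    def __call__(self):
        if self.fn is None:          # fault the module in on first use
            self.fn = fetch(self.name)
        return self.fn()

# Association: the table of modules exists before execution starts.
main, helper = LazyModule("main"), LazyModule("helper")

print(main())        # prints 42
print(FETCH_LOG)     # ['main', 'helper'] - fetched in execution order
```

Note that `helper` is only fetched because `main` actually called it; a module never executed is never transmitted.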

Q11. I imagine a Code Server could be running either on one or more server machines in a LAN or on many internet servers. This could be combined in a proxy-like manner (I am not sure if I know what proxy really means today). On the internet, an application could be located on several Code Servers – for example the main module on the vendor’s Code Server but the 3d graphics module on the server of the 3d guru.
A11. Right.  But the intent is that all the pieces of an application eventually end up on the client machine.  Also, it is unlikely that the client would need to go farther than his local ISP for the required modules.  The issue of distributed processing is entirely different.

Q12. What about data that comes with software, e.g. configuration files, scenarios? Can they be handled with the Code Server too, or do you need to download them conventionally? I am not referring to data that is produced by the user of the application, e.g. the text that is typed in a text processor.
A12. Yes, they could. The patent can probably be interpreted as covering this, since it speaks of any “recipe” with references to other recipes. Of course most such data files lack the complicating property of referring to yet other such files in a recursive kind of manner, so they are just the simple case of a referenced object. And the infrastructure we propose will handle these files as part of the packages – it will be as if there is a “code” envelope for configurational data. If you know how icons are embedded in Windows executables, you may be able to see a close parallel.
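The “code envelope for configurational data” in A12 can be pictured as the degenerate case of a module record whose reference list is empty. A sketch with invented field names:

```python
# A module record as a Code Server might store it: a payload plus a list
# of references to other modules. Field names are invented.
def make_record(name, payload, refs=()):
    return {"name": name, "payload": payload, "refs": list(refs)}

# Code modules reference other modules, possibly recursively...
renderer = make_record("renderer", b"machine code", refs=["mathlib"])
# ...while a data file is just the simple case with no references:
scenario = make_record("levels.cfg", b"level1\nlevel2\n", refs=[])

def transitive_refs(record, lookup, seen=None):
    """Walk the reference graph - the 'association' step in miniature."""
    seen = seen if seen is not None else set()
    for ref in record["refs"]:
        if ref not in seen:
            seen.add(ref)
            transitive_refs(lookup[ref], lookup, seen)
    return seen

lookup = {"mathlib": make_record("mathlib", b"", refs=[])}
print(transitive_refs(renderer, lookup))   # {'mathlib'}
print(transitive_refs(scenario, lookup))   # set()
```

Both kinds of object travel through the same envelope; data files simply contribute nothing further to the association step.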

Q13. What is the preferable size for modules that are distributed by a Code Server?
A13. It doesn’t matter – but the better broken up they are, the more seamlessly they can be distributed.  Against that, the more modules there are, the more entries in the module interrelationship diagram there are (more searching, more association time), but we say that the time to associate is small compared to the time to download as long as the network is slow (and it always will be).  Also, Code Servers can store the results (i.e. names, locations, and reference lists) in a database.  Use of large modules is a stupid optimization which works when you have a fast disk. The original system we built this for had a slow network (1 MBps) and (potentially) NO hard disk.  Every so often someone points out that things load faster with larger modules (see for example this technical note from Microsoft), but with the Code Server approach we are pushing for smaller ones.

Q14. When an application is completely in the client’s permanent storage cache and therefore can be executed offline, how can the client be notified about a new version available?
A14. The responsibility of the Code Server, starting with the Mini-CS on the client, is to check with its immediate superior whether there is a change in status of the code file in question. It will make the same request to the Code Server it made before. If the answer it gets is different (e.g. the checksum has changed) then it knows to download the new version. And the Code Server will check with its immediate superior in the same way. Of course, when a change is made at the source level (way out there at the Guru’s computer, to use our earlier example), there are two ways such a change can be forced all the way to the user:
a. The Code Server periodically polls its superior for all “known” modules every so many hours/days/invocations. 
b. The Guru (software maintainer) can send down a flag to the Code Server network to specifically force the clearing of the “ok” flag for that module. Of course even then the download doesn’t actually take place until there is a request for the object.
All this is technology that the “push” network folks (Marimba, Pointcast and others)  must have already worked out.
And of course if there’s no network turned on then no checking is possible. That brings up the point that this technique could be used to install applications off CD’s. No more installation programs, hoorah!
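The chain of checks in A14 can be sketched as a three-level hierarchy, each level consulting its superior before trusting its own cache. This is a toy model under invented names, not the real protocol:

```python
import hashlib

def digest(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

class CodeServer:
    """Toy Code Server: a cache plus an optional superior to consult."""
    def __init__(self, superior=None, store=None):
        self.superior, self.cache = superior, dict(store or {})

    def get(self, name):
        if self.superior:  # check upward before serving the cached copy
            fresh = self.superior.get(name)
            if name not in self.cache or digest(self.cache[name]) != digest(fresh):
                self.cache[name] = fresh       # pull the new version down
        return self.cache[name]

origin = CodeServer(store={"mod": b"v1"})      # the Guru's machine
isp    = CodeServer(superior=origin)
mini   = CodeServer(superior=isp)              # Mini-CS on the client

first = mini.get("mod")                        # b"v1" propagates down
origin.cache["mod"] = b"v2"                    # source-level change
second = mini.get("mod")                       # next request sees b"v2"
```

A real system would cache the checksum answers (the "ok" flag) rather than re-asking on every request, but the propagation logic is the same.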

Q15. Code Servers encourage you to easily try out programs offered in the web.  How can you ensure or even make it more probable that the programs do no harm to your computer and the data stored on it?
A15. Easily – the Code Server is a perfect way to license, meter or encrypt software. Thus it easily allows a Software Provider (or its agents, such as ISPs) to make rules about who gets what and in what version. This would allow the point-of-sale to be the ISP rather than Microsoft, for example. Wouldn’t that be a nice improvement for everyone? (Postscript 14.7.99: the latest jargon for this is “software rental”; see for example this reference.)

Q16. Will there be one Code Server standard (at least for PCs), or will there be several Code Server products incompatible with each other? Are you going to enforce standardization using the power obtained from the patent?
A16. The standard will come when the operating system is modified by Microsoft or Torvalds or whoever. It will represent a level of binary compatibility (the protocol) without which things won’t work. Vendors are right now doing a bunch of ad hoc things to distribute (Netscape smart update, Marimba, Ncompass, Microsoft), but they are all tacked on the outside of the OS, just as our original thing was. Once the OS itself does it, the whole OS can be distributed by Code Servers and there won’t be an issue.
A pure application product for distributing software by this means will not succeed commercially, I predict. You might be able to sell an application which included it and depended on it for distribution (the killer app for the Code Server platform). The incremental improvements (e.g. NT5) in existing technology will encourage people to buy the Microsoft product or some other standard such as Linux. That in turn leaves legal action as our only option for making money — unless we can get a competing OS vendor or an application vendor with some clout to embrace the Code Server idea.

Q17.  How did you come up with the name Microtopia?
A17.  Despite the rather obvious connection to Utopia, the name is more subtle in that the Greek letter μ (mu), which stands for the prefix micro, looks (and sounds!) a lot like a ‘u’ in the Latin alphabet.  We wanted a name that suggests how ideal the world of Code Servers is. The definitions on the index page mimic the Oxford English Dictionary definitions of Utopia; we put the second one up there because this idea really does sound too good to be true, but we have yet to find a fundamental flaw.  Seriously though, this idea could make the computer world a lot more Utopian, and might even make order out of the chaos of the internet!

Paradigm Shifts Between Phone, Tablet, Desktop & Web Interfaces

…Or how not to approach development. It’s busy in Vincentland, but I’m still determined to regularly update Tech IT Easy. Today, my question is: What determines the choice for a platform? Is it market, personal taste and talent, or the desire to create something that fits a certain paradigm? In the end, no matter how cool or uncool, we’re talking about a technology choice, which is affected by cost (time & financial), the tools available, and the potential return on investment. Just to put it coldly…

I’ll be honest. I have become a big fan of the tablet paradigm. Similar to the Nintendo Wii, it’s a blue ocean that not only addresses the un-targeted space of everyone who doesn’t use computers (from toddlers to old people), it also represents a potential (!) future for computing, away from the constraints of the abstract mouse and the oh-so-square keyboard. It’s a portal into right-brained computing, which I’ve written about several times before. Traditional computing is left-brained: it’s logical and doesn’t allow for the unstructured approach to creativity & thinking that materials like paper do. We’ve long needed a digital equivalent, and it quite possibly is here today (or soon anyway).

The biggest obstacle to tablets becoming mainstream is not software, it is cost. You can justify the cost of an (Apple-priced) laptop in a work or school context: it drastically increases your productivity. While Apple has tried to keep the cost of its tablet line relatively low, there’s no equivalent formula for calculating the return on investment from tablet computing yet, because the money-making processes aren’t easily carried out via that medium. At some point, I envision tablets becoming clients hooked up to a massive server, docking into a pseudo-computer with a keyboard and (something akin to) a mouse. That would require a central computer to act as storage and a well-thought-out dock on people’s desks. The reason this doesn’t exist yet is that no-one’s sure how to interact with the touch-screen when it’s standing up like a display — it’s an ergonomic conundrum.

The bigger problem is simply that having a device with too many faces — touch-interface on the one side, desktop power-horse on the other — creates a confusing paradigm for both users and developers. Would there be software that only works on the tablet-side, or would a software have to be “cross-platform?” It appears to me that this problem is being addressed in Apple’s new operating system Lion, that integrates features from OS X and iOS, but we’ll see if and how it works in practice. In any case, it will be designed to legacy-support the last few generations of Mac-computers, which all use a traditional mouse and keyboard interface. Future versions may be a fabled iMac that is also whole-or-part touch-screen, we’ll see.


The difference between phone and tablet is clear: minimal screen size and processing power (somewhat changing) and maximum portability. Tablets are also portable, but more armchair or go-to-a-café portable than wait-in-line-in-the-supermarket portable. Not having used a tablet every day yet (but it’s happening soon), I don’t quite know how this translates to applications. I do expect to use a tablet as a magazine and book reader, and would love to use it as a boardgame replacement with other people(!), both of which are natural to either the armchair or café context. The phone interface naturally lends itself to casual use, whether it’s a 1-minute game or a quick browse through the news or mail. While the iPhone’s retina display is beautiful for reading eBooks, it’s still a nicer experience on a bigger screen or a dedicated eBook reader.

Desktop software is geared towards productivity, both in an office and an entertainment context. If you see how some people play StarCraft, you’ll understand that there will never be such a game on a console (though we’ll see about tablets). Equally, first-person shooters released in parallel on desktop and console perform much, much better on the desktop. There’s no beating the mouse-and-keyboard combo, whether you’re typing away in Excel or fragging your enemies to little pieces.

Regarding the web, I was fascinated to read the Ars Technica article entitled “The Strategy Tax.” It refers to the scenario where Microsoft’s Office business unit was competing with the Web division and was blocking the latter’s ability to innovate. Or so they say, but looking at what’s on the Web now in terms of Office alternatives, this is a credible claim. The desktop’s limitation is the lack of sync (something that the Google laptop is trying to address), which affects distribution and security (in the back-up sense). While the web supposedly doesn’t yet have the matching horsepower that a Mac Pro or Alienware desktop computer would have, you can clearly switch between both — use the web for streaming and the desktop for processing — very effectively.

To summarise, following are the paradigms that I understand these four platforms to fit into:

Standing in line portable: Phone. Mostly used for quick activities on the road, like checking your todo’s, playing a 1-10 minute game, or browsing some quick news or mails. I see the interface for this being as reaction-fast as possible. We just want to launch it and go.
Armchair portable: Tablet. Mostly used for activities that take at least half an hour and can be done on the couch, e.g. reading or playing a game (I’m purposefully leaving out complex activities like drawing or making music, both of which have hobby and professional applications). Launch time is important, but there’s more room for multi-tasking and displaying rich information.
Workhorse: PC. The powerful combination of mouse and keyboard, together with other factors, contributes to its use for activities that require a lot of productivity. We care more about ability and features here than speed (though no one stops caring about speed).
Connected: Web. We favour the web because it keeps us in connection with stuff that is relevant to the task. That affects things like storage, security (both positively in the sense of backups and negatively in the sense of encryption), and more. Since the interface is used in the context of either a phone, a tablet, or a desktop, we tend to require a fitting interface and functionality from web-apps.

But why do I ask all these questions? In the end it’s a distraction, because I’m the type of person that asks a million questions to be sure before committing to a direction. In my case, I use so many touch-interface apps and hate PCs so much that I want to try developing (small) apps for that platform as well. But I’m also wondering about the future of these platforms and whether developing for them is a safe investment. If you ask me, they are, but the exact shape isn’t clear yet. And it’s up to software developers, more so than hardware developers, to define how tablet platforms will be used, by toddlers, the elderly, and my generation—the 25-45 age-group.

EU investigation into roaming prices

The European Union Commission Thursday said it would launch a formal in-depth investigation into roaming prices charged in Germany by Vodafone Group PLC (VOD) and T-Mobile International AG (TMO.YY).

The regulators can impose fines of up to 10% of annual turnover if they find the two mobile phone companies guilty of abusing their dominant positions in the German cell phone market to set excessive prices for roaming, a service provided to cell phone users outside their home country.

In practice the levies are a fraction of the 10% limit.

The Commission said its aim was “to ensure that European consumers are not overcharged when they use their mobile phones on their travels throughout the E.U.”

E.U. officials in Brussels have been investigating T-Mobile and Vodafone since 2001.

Since the 1990s, mobile phone companies have charged higher prices for roaming than for conventional phone calls.

The E.U. Commission has been cracking down. In July it issued a threat to Vodafone and MmO2 PLC (OOM) about roaming rates in the U.K.

The problem, said E.U. spokesman Jonathan Todd, is that these companies have been overbilling foreign phone companies whose customers “roam” in Germany and the U.K.

Todd said the Commission had looked at other countries but “we established that Germany and England had the highest prices”.

E.U. competition commissioner Neelie Kroes had to absent herself from the case because she once served on the board of O2. Commission president Jose Manuel Barroso handled the case. It’s the third time Kroes has had to recuse herself because of a conflict of interest.

Todd said that resolution of the Germany and U.K. investigations was “months away.”

As Dow Jones Newswires reported earlier this week, Vodafone has already promised the E.U. it will cut roaming charges throughout Europe this summer, according to an E.U. official who declined to be named.

When the new charges are applied, Vodafone clients will pay a flat fee for roaming calls, which will be much lower than the current charge of EUR0.89 a minute within the E.U., the official said.

Vodafone spokesman Jens Kuerten declined to comment, only saying that the company will talk about fees when it is ready.

Concerns for investors in mobile telecommunications

Slowing sales growth and increased margin pressure are shaping up to be the major concern for investors in mobile telecommunications equipment makers in 2005.

Market leader Telefon AB LM Ericsson’s (ERICY) fourth quarter 2004 gross margins Thursday came in below expectations in the face of intense competition. The company said the effect was specific to the fourth quarter – an assertion some investors appeared to doubt as the shares were marked down 7.8% to close at SEK20.20.

Market research firm Gartner forecasts the market for mobile network gear, which accounts for the majority of sales, will grow 5% in 2005.

That compares with estimated 10% growth in 2004, which helped major players show good revenue growth and recovering earnings to offset massive losses and falling revenue in previous years. Ericsson’s shares were up 64% in 2004.

Ericsson projects the market growing between 2% and 5% – both Gartner and Ericsson measure in dollars – and Gartner analyst Jason Chapman said he sees industry margins coming under pressure short term as there will still be a fairly high proportion of hardware sales during at least 2005.

Hardware sales typically attract lower margins than software. Competition for new contracts is fierce and vendors are forced to accept wafer-thin margins, hoping to increase them over time.

“Later, margins should be helped by a growing share of software as upgrades to third generation equipment show up in sales,” Chapman added.

Currently many new networks are being rolled out, generating sales of hardware such as base stations and other network equipment. In emerging markets such as Russia, India and Brazil, it’s primarily second-generation networks using Global System for Mobile Communications (GSM) technology that are driving sales. In Europe, third-generation network rollouts are now taking place on a large scale.

In connection with the fourth quarter earnings releases Alcatel SA (ALA), Nokia Corp. (NOK) and Ericsson all said that margins have been or will likely be negatively affected by low prices on some new orders the companies have signed for rolling out those networks.

Analysts are worried about the implications of these signs of margins weakness.

“We remain concerned that margins, both gross and earnings before interest and taxes, will remain under pressure through 2005,” said CSFB in a comment to Ericsson’s fourth-quarter earnings.

The bank sees Ericsson’s operating margin coming down to 20% in 2005 from 22% in 2004.

CSFB added that the mobile systems market remains one of the most competitive global industries.

Number two player Nokia said after its fourth-quarter earnings release on Jan. 27 that its targeted 14% operating margin in the infrastructure business will likely not be met short term, also blaming new network rollouts.

Ericsson, Nokia, Motorola Inc. (MOT), Alcatel and Lucent Technologies Inc. (LU) showed a combined 14.5% sales growth from their mobile infrastructure and services businesses in 2004 measured in euros, according to calculations by Dow Jones Newswires.

Measured in dollars that growth was around 26%, implying that the major vendors gained market share at the expense of smaller ones.

Ahead, sales growth is seen being higher for services related to mobile infrastructure than for the gear itself. Ericsson said it sees the services market growing 10% in 2005.

Siemens AG (SI) no longer discloses figures for the mobile networks unit and Nortel Networks Ltd (NT) has yet to disclose its full year 2004 earnings due to accounting problems.

Watch TV While on the Run

New services let you watch TV on your mobile phone, but it may be tough on your eyes.
Tired of gabbing, writing text messages, and playing games on your mobile phone to kill time? Try watching some TV.

I’ve done just that for the past two weeks. And the experience has been, well, entertaining, to be honest.
Let me say from the start that I’m not a big TV viewer, but when I view, it’s generally in one of three areas: news, sports, and movies. All three are available in the Live portal of German network operator Vodafone D2, one of Europe’s first operators to offer a mobile TV service.

The mobile TV offering, designed especially for higher-speed 3G (third-generation) phones, currently offers a series of rotating shows, a sports channel that gives a roundup of the Saturday games shortly after they’ve finished, a news channel updated four times a day, and full-length movies. Live news or sports coverage is not available today but is technically possible and likely to be offered in the not-too-distant future, according to Vodafone D2.

Put to the Test
Because mobile TV is about viewing on the go, I checked out the service in a car, train, and streetcar. Reception in all three was fine as long as I remained in a 3G cell–which meant that when I recently boarded a train in Frankfurt, I lost reception shortly outside of the city. That’s a shame because I think many train commuters (and Germany has plenty of them) would readily use the mobile TV service.

Third-generation coverage is still spotty because most operators in Europe, including Vodafone D2, are initially concentrating on big cities. However, coverage will be gradually extended to major roadways and railways, and later to smaller communities.

The streetcar test was interesting. I found nearly everyone standing near me in the crowded compartment trying to get a peek at what I was watching: the N24 news channel.

The highway test was a lesson in itself. I let my young boys, who are under 12, view some programs on the phone–Motorola’s sleek E1000–while I sped around the autobahn near Dusseldorf. They went wild. So there’s a toy for kids on long trips, I guess, but not only for them: drivers caught in hours-long traffic jams (not uncommon in Germany) should be interested, too.

Mobile TV, of course, isn’t just for people physically on the move; it’s also intended for those waiting for a train or taking a coffee break or even sitting in a cab waiting for a passenger. I asked a cab driver what he thought about the service after I tested it during a ride. “Nice,” he said. “Really nice, but I doubt I can afford it.”

Paying the Price
For sure, pricing will be crucial to the success of mobile TV. Although the service is currently free as a promotion, in April Vodafone D2 will begin charging about $4 per hour after the first two free hours included in the monthly subscription fee, which, depending on the volume of minutes, ranges from $26 to $125. However, more differentiated fees, such as pay per view, pay per time, or pay per news item or sporting event, are in the making.

For kids, those fees can add up quickly, but they’re not the primary target group, according to Vodafone D2, which sees the most potential in 20- to 30-year-olds, as well as businesspeople in that age bracket and older.