Pip

Slitherine Team Blogs. Written by the team to give you a behind-the-scenes picture.


pipfromslitherine
Site Admin
Posts: 9702
Joined: Wed Mar 23, 2005 10:35 pm

Pip

Post by pipfromslitherine »

Well, first blog entry. Hopefully there might be some interest in the behind-the-scenes goings-on at the code face as we mine the Slitherine games.

Just back from our latest company meeting. These are always a good way to get everyone synced up on the details of the business, and also to drill down into game design issues and features.

But it's back to multiplayer coding for me - jetlag and all. GBotMA uses a slightly different approach which in theory makes multiplayer a little less tricky, but it compensates for this with a more complex lobby (to support our user campaigns, which hopefully people will enjoy creating!) and slightly trickier logic when players go to enter their orders.

We are using UDP for all our networking, as it is both faster (it avoids the overhead of TCP stacks and the like, especially as we mix guaranteed and non-guaranteed messages) and more easily ported to other systems. We do have our own guaranteed delivery system, but it is much more under our control than a TCP connection would be.
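As a rough sketch of what such a guaranteed layer involves - illustrative only, with made-up names rather than our actual code - the core idea is to stamp each guaranteed message with a sequence number, keep it around until the peer acknowledges it, and resend it on a timer:

```python
import time

class ReliableChannel:
    """Toy 'guaranteed messages over UDP' layer: sequence numbers, acks,
    and timed resends. Names are invented for illustration."""

    def __init__(self, resend_interval=0.2):
        self.next_seq = 0            # sequence number for the next guaranteed message
        self.unacked = {}            # seq -> (payload, time last sent)
        self.resend_interval = resend_interval

    def send_guaranteed(self, payload, send_fn):
        """Stamp the payload with a sequence number and remember it until acked."""
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = (payload, time.monotonic())
        send_fn({"seq": seq, "payload": payload})   # send_fn wraps the raw UDP send

    def on_ack(self, seq):
        """The peer confirmed receipt, so stop resending this message."""
        self.unacked.pop(seq, None)

    def poll_resends(self, send_fn):
        """Call every frame: resend anything that has gone unacked too long."""
        now = time.monotonic()
        for seq, (payload, sent_at) in list(self.unacked.items()):
            if now - sent_at >= self.resend_interval:
                self.unacked[seq] = (payload, now)
                send_fn({"seq": seq, "payload": payload})
```

The appeal of rolling your own is that the resend interval, buffering, and bandwidth behaviour are all yours to tune - which is exactly the control you give up with TCP.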
The biggest issue with multiplayer is making sure that both machines can run smoothly, but without running past the point where an order or system event should have been executed - the big problem here being that you need to make sure they *know* that they might have missed an order message if it failed to get through! Basically we handle this by including in every tick message the last order message that should have been received. If an order message has been missed, then you have no option but to ignore incoming tick messages until you get the order. Ignoring incoming ticks will stall you - which in turn will stall the other player, as they aren't seeing your tick count increasing. Thus you will both stall until the guaranteed messaging system manages to get the message through, and everything can then start up again.
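In toy Python, that stall logic might look something like this (again, names are illustrative, not the shipping code):

```python
class LockstepPeer:
    """Toy version of the tick/order handshake described above. Each incoming
    tick message carries last_order_id: the newest order that should already
    have arrived. All names here are illustrative."""

    def __init__(self):
        self.tick = 0
        self.last_contiguous_order = -1   # highest order id received with no gaps
        self.pending = {}                 # order_id -> order (may arrive out of order)

    def on_order(self, order_id, order):
        self.pending[order_id] = order
        # advance the watermark as gaps fill in
        while self.last_contiguous_order + 1 in self.pending:
            self.last_contiguous_order += 1

    def on_tick_message(self, tick, last_order_id):
        """Return True if it is safe to advance the simulation to this tick."""
        if last_order_id > self.last_contiguous_order:
            # An order went missing: ignore ticks (stall) until the guaranteed
            # layer re-delivers it. The other player will stall too, once they
            # see our own tick count stop increasing.
            return False
        self.tick = tick
        return True
```

The key invariant is that the simulation never advances past a tick unless every order that should apply to it has already arrived.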
Feel free to comment on the kind of stuff you would like to see in my blog - it will tend to be programming based, but there are always general rants available on request!
Redpossum
Brigadier-General - 8.8 cm Pak 43/41
Posts: 1813
Joined: Thu Jun 23, 2005 12:09 am
Location: Buenos Aires, Argentina

Post by Redpossum »

Fascinating stuff, Pip.

For years I was a dedicated FPS gamer, and I do mean dedicated as in -

3-4 hours a day of unorganised play on public servers

2-3 hours of practice with the league team, three nights a week

1-hour League game on Sunday

30-minute team meeting after the game to review.

OK, where am I going with this? Well, the quality of the game experience in an FPS game varies inversely with the amount of lag. Partly this is determined by latency, but in large part it has to do with how the netcode is written.

The first Half-Life engine was a good example of very poor netcode. It was designed as a single-player game from the start; multiplayer was added virtually at the last minute when the marketing department declared it a necessity, and online play was always laggy as hell because of this, regardless of latency.

The Quake III engine was a good example of the other extreme. It was designed to be multiplayer from its very inception, and online play was always relatively smooth unless the connection was truly hosed, or the server hopelessly bogged down by other processes.

Now, the game you're talking about here is, if I understand correctly, going to work something like Legion Arena in terms of the actual on-the-field play. Thus it would be classed as real-time, at least in general terms.

So, how does your challenge in writing the netcode for this new game compare to that faced by the FPS game coder? Is it essentially the same? Do you have different priorities?

The other thing I have noted and wondered about is the cable vs DSL issue. Now this may be different in Europe; I have heard we use different DSL protocols or architectures.

In the US it seems as if cable users get the lowest ping, but higher packet loss. DSL users get higher ping times, but lower packet loss.

So how does packet loss figure in?
adherbal
The Artistocrats
Posts: 3900
Joined: Fri Jun 24, 2005 6:42 pm
Location: Belgium

Post by adherbal »

RTS and FPS netcode are very different, because the former works with a server that runs the only "real" game, while the client PCs receive data from that server (players' coordinates, orientation, speed, weapon type, ...) to reconstruct the current game state - which always differs slightly from the server's game due to lag. In an RTS, ALL PCs have to run the exact same game (synchronized), because each PC sends data about the players' orders ("mouse clicks") rather than the coordinates of soldiers etc. If one PC misses a data packet and cannot recover it, it will diverge from the other PCs and the game will play out entirely differently ("out of sync"). Keeping all PCs in sync is the hard part of RTS netcoding.
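A toy contrast of the two models (all names and payload layouts here are invented for illustration):

```python
import hashlib

# FPS-style: the server runs the only "real" game and streams state to clients.
def fps_server_snapshot(entities):
    """What an FPS server might broadcast each tick: raw entity state."""
    return [{"id": e["id"], "pos": e["pos"], "vel": e["vel"]} for e in entities]

# RTS-style: peers exchange only orders, and every PC runs the identical simulation.
def rts_order_packet(tick, orders):
    """What an RTS peer might broadcast: just the player's inputs for a tick."""
    return {"tick": tick, "orders": orders}

def state_checksum(game_state_bytes):
    """Lockstep peers can periodically compare a hash of their simulation state
    to detect (though not repair) the 'out of sync' divergence described above."""
    return hashlib.sha1(game_state_bytes).hexdigest()
```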
pipfromslitherine
Site Admin
Posts: 9702
Joined: Wed Mar 23, 2005 10:35 pm

Post by pipfromslitherine »

In theory you could write an RTS in the same server-centric way, but the number of objects means you would burn a lot of bandwidth doing it. So if (say) you only supported LAN play, it might be practical, and it would remove the considerable pain of fixing all your out-of-sync bugs.
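Some back-of-envelope numbers (every figure below is an assumption, just to show the shape of the problem):

```python
# Back-of-envelope bandwidth comparison (all numbers are assumptions):
units = 500                 # objects on the battlefield
bytes_per_unit = 16         # id + position + orientation per snapshot
snapshots_per_sec = 10      # server state broadcasts per second

state_bandwidth = units * bytes_per_unit * snapshots_per_sec   # server-centric
order_bandwidth = 4 * 32 * 2    # ~4 orders/sec of ~32 bytes, from 2 players

print(f"state replication: ~{state_bandwidth / 1024:.0f} KB/s per client")  # ~78 KB/s
print(f"order sync:        ~{order_bandwidth} bytes/s in total")            # ~256 B/s
```

Tens of kilobytes per second per client is fine on a LAN, but it is a lot to ask of the internet connections most players have; a few hundred bytes of orders is not.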

FPS games tend to use a lot of prediction and other tricks to hide the latency, knowing that 'real' outcomes can get fixed by the server later down the road.
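A minimal sketch of that prediction-plus-correction idea (illustrative, not any particular engine's code):

```python
class PredictingClient:
    """Toy client-side prediction: apply local input immediately, keep the
    inputs around, and re-apply them when the server's authoritative state
    arrives. Purely illustrative."""

    def __init__(self):
        self.pos = 0.0
        self.pending = []   # (input_seq, move) not yet confirmed by the server

    def local_input(self, seq, move):
        self.pending.append((seq, move))
        self.pos += move    # predict: move now rather than waiting for the server

    def on_server_state(self, acked_seq, server_pos):
        # The server has processed everything up to acked_seq: rewind to its
        # authoritative position, then replay the still-unconfirmed inputs.
        self.pending = [(s, m) for (s, m) in self.pending if s > acked_seq]
        self.pos = server_pos
        for _, move in self.pending:
            self.pos += move
```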

In terms of lag, with a synced approach the game needs to dynamically measure the trip times and adapt, to give the best compromise between lag (the delay between clicking and the unit moving) and pausing because the game isn't sure it is safe to run the model (since one of the other players might still issue an order which occurs in the past).
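One plausible way to implement that compromise - a heuristic sketch, not our actual tuning - is to convert the measured round trips into a number of ticks of input delay:

```python
def order_delay_ticks(recent_round_trips, tick_seconds=0.05, safety_ticks=1):
    """Heuristic (assumed, not Slitherine's actual tuning): schedule orders far
    enough ahead that every peer should have them before they execute."""
    one_way = max(recent_round_trips) / 2.0          # worst recent one-way trip
    return int(one_way / tick_seconds) + 1 + safety_ticks

# An order issued on tick T is tagged to execute on tick T + delay. The longer
# the measured trips, the bigger the click-to-move lag, but the less stalling.
delay = order_delay_ticks([0.12, 0.15, 0.11])        # -> 3 ticks at a 50 ms tick
```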
Redpossum
Brigadier-General - 8.8 cm Pak 43/41
Posts: 1813
Joined: Thu Jun 23, 2005 12:09 am
Location: Buenos Aires, Argentina

Post by Redpossum »

Hah, cool answers! After I made that post, I was sort of wondering if it was a dumb question. Judging from the responses, it was not :)

adherbal, do you know, I had totally overlooked the fact that FPS games are a client-server setup, while RTS games are usually peer-to-peer.

Now that I write that, though, it occurs to me that not all were like that. Many Sierra games of the '90s and early '00s had online play through WON - Homeworld, for example.

But I don't know if that was a true game-server setup. There was absolutely no provision for any multiplayer without WON, which argues that it was truly a server, sort of. Or maybe just Sierra being corporate control-freaks.

OTOH, maybe it was just functioning as a lobby. Hmmm, I just remembered something else. When you played Homeworld (HW1 hereafter) online, there was provision for up to 8 players. If you had some AI sides in addition to the human players, you got a warning at game start that if a player dropped out of the game, a few of the AI sides would poof as well. And they explained that this was because the AI sides were actually run on the client computers.

Now, at first blush this would seem to clinch things in favor of a peer-to-peer model. But could it simply have been a case of distributed processing?