Gran Turismo 5

The downloadable demo? I must say it played much better than Forza - but I was hoping for a bit more from the graphics.

The time trial demo? Yes, the graphics were a downgrade even from Prologue, but it was a demo (more of a competition, wasn't it?) of the new driving physics model they've developed. I suppose they simply plugged it into some old Prologue graphics engine code.
 
I hope we're still going to arrange a RedCafe Tournament. We could all chip in £20 to a pot (literally) and the winner takes all. Or split it between 1st, 2nd and 3rd.

When the feck is it being released anyhow?

As far as I know, GT5 will let you set up driving clubs, so to speak, so you should be able to do that.

If I had to hazard a guess, this will be an autumn release - say mid-to-late October?
 
That sounds good - hopefully it will take off better than WipEout did.

Aye, I can't see them releasing a game this big in the summer so that sounds about right. Unless they put it off until Christmas and cut the price of the console again.

I'm guessing we will hear more at E3 maybe.
 
Stemmy and I had a good few goes with WipEout; you said you didn't have a net connection at the time.

They could release GT5 at any time and it would sell, but autumn makes more sense, with bundles ready for Christmas. That lets them shift as many copies as possible at full price, and then plenty of hardware when people go shopping for the holidays. There's a new hardware revision incoming, too.
 
6hq2k3.jpg


dlj41d.png


2v7vvqe.png



Weastey, are they actual in-game graphics... and the top one (I think) - is that a picture of the actual race, not a replay?

Now, if the demo/time trial download was Prologue quality, have you any idea how much the graphics will improve on release? The physics were top notch, no doubt... if they can get the graphics to match, then it's a must-buy for me - and I don't really care for car games.
 
All the media they release is at photo-mode quality - 20 Mpx super-sampled down - so none of it is realtime 60fps, no. Is it all created by the game? Yes! They could probably produce that level in realtime if they hooked 16 PS3s together. All it really shows is that the engine is capable of that, given enough processing power.

It's not really possible to do that at 1080p 60fps in realtime at a console price point - not even with a beast of a PC.
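
A bit of back-of-envelope arithmetic on that, if anyone's curious - my own numbers, not anything official, assuming the ~20 Mpx photo-mode figure above:

```python
# How much more work is one 20 Mpx photo-mode shot than one realtime
# 1080p frame? (Pixel counts only - real shading costs scale even worse.)

photo_mode_px = 20_000_000      # ~20 Mpx super-sampled photo-mode source
realtime_px = 1920 * 1080       # one 1080p frame, ~2.07 Mpx

factor = photo_mode_px / realtime_px
print(f"photo mode shades ~{factor:.1f}x the pixels of a 1080p frame")
# -> ~9.6x, which sits in the same ballpark as the '16 PS3s' hand-wave
# once you add higher-quality shading and drop the 1/60-second deadline.
```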
 
Perhaps the next generation of consoles? It's a shame, really; I've not been that impressed with the graphics on any of the current consoles.
 
At the end of the day it's all down to how many transistors you can throw at the problem. This generation started with the chips on a 90nm process. PS3 now has them at 45nm, so you can pack roughly four times as many into the same die area as you could back then. Intel is moving towards 22nm, and I suppose by the end of this generation the consoles will be on that node, but you can't really go much further, because some features are already only a handful of atoms wide (22 nanometer - Wikipedia, the free encyclopedia). The other major problem is memory bandwidth: it keeps falling further and further behind chip speeds (I know Sony is messing around with optical buses to try to solve this) - which is why the Cell on PS3 gives each SPU its own local store as close as possible, and the XB360 GPU uses eDRAM. Top-of-the-range PC GPUs are not really feasible in the console space because of cost, size, and power - they are huge things.
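
A rough sketch of the scaling arithmetic, taking the node names at face value (real processes never shrink quite this cleanly): density goes with the inverse square of the feature size.

```python
# Nominal transistor-density scaling between process nodes:
# halve the linear feature size and ~4x the transistors fit per area.

def density_ratio(old_nm: float, new_nm: float) -> float:
    """How many times more transistors fit in the same die area."""
    return (old_nm / new_nm) ** 2

print(density_ratio(90, 45))  # 4.0   -> 90nm to 45nm: ~4x in the same area
print(density_ratio(65, 45))  # ~2.09 -> matches the 'nearly twice the
                              #          density' Intel quote further down
print(density_ratio(45, 22))  # ~4.2  -> the headroom left down to ~22nm
```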
 
 

We are getting way off topic, but....

It's a general and rather serious problem. Chip makers have always relied on smaller nodes to cut cost and increase processing power, but we are about to hit a brick wall in terms of the technology currently used. They'll have to do things differently.

I'll have to post the article here, because you need to be a registered member of the website to view it.

Moore to come?

For 50 years it has been the guiding principle of the digital revolution. Jon Excell asks if the end is now in sight for Moore’s Law

It was 1965. The first computers based on integrated circuits had recently been launched. And Gordon Moore, a young executive in the fledgling semiconductor business, was asked by Electronics Magazine to jot down his thoughts on the future of the silicon chip.

Fifty years on and the Intel founder’s observations — neatly repackaged as Moore’s Law — have become shorthand for technological change. His prediction that the number of transistors on a chip will double every two years has never stopped reflecting the pace of change in a business that has gone from start-up to a $200bn (£114bn) a year industry in half a century.

It is an astonishing record — which other industry churns out products that double in performance, halve in size and double in efficiency once every two years? Just imagine if the car industry could make a similar boast.

But today the computer business stands at a crossroads. As engineers and scientists strive for the breakthroughs that will keep Moore’s observations intact, they are pushing the technology to a point where its key components will become so tiny that they consist of just a few atoms. And, as anyone with half an eye on the quantum world will know, when devices get this small, strange things begin to happen.

So how long has the industry got before the laws of physics step in and prevent traditional technology from taking another step?

According to Prof Erol Gelenbe, an electronics engineer at Imperial College London, Moore’s Law will continue to hold in the short term, thanks to the type of breakthroughs in materials science, fabrication technology and chip design that have kept it going thus far.

He said: ‘Fabrication processes have become more and more accurate with much better materials. As the materials improve and become purer there are fewer possibilities of errors during fabrication.

‘And as things become more accurate you can become much smaller, you can do much finer etching because you are etching on a better material. Plus, the computer-aided design process of electronic circuits has also improved dramatically. There are far more accurate models and also computer technology allows us to simulate much larger models. Interestingly the modelling and computer simulation itself depends upon Moore’s Law, with more and more powerful computers enabling scientists to simulate larger circuits.’

Although these improvements are incremental they are no less mind-boggling. Last year Intel launched a new generation of chips featuring technology so tiny and efficient that it was hailed by Moore as the biggest transistor advance in 40 years.

These so-called high-k metal gate chips, made using a new 45nm lithography process (this refers to half the distance between identical components on a chip) have nearly twice the transistor density of previous chips built on the company’s earlier 65nm technology.

Intel’s silicon modulator under test, above left, and in close up, centre. Above right, the company’s single modulator chip

While the world’s first transistor radio contained just four transistors, the 1cm² chips built with the 45nm technology can contain up to 820 million, each measuring about 100 atoms across, and featuring gates that are about five atoms wide.
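
(A quick sanity check of the two-year doubling against that 820 million figure - my arithmetic, not the article's, taking Intel's 4004 of 1971 with its ~2,300 transistors as the baseline:)

```python
# Project Moore's Law forward from the Intel 4004 (1971, ~2,300 transistors)
# and compare against the ~820 million quoted for the 45nm generation.

BASE_YEAR, BASE_TRANSISTORS = 1971, 2300

def moores_law(year: int) -> float:
    doublings = (year - BASE_YEAR) / 2    # one doubling every two years
    return BASE_TRANSISTORS * 2 ** doublings

print(f"{moores_law(2007):,.0f}")  # ~603,000,000 - the same order of
                                   # magnitude as the 820 million above
```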

Indeed, the shrinking size of gate dielectrics, critical elements of transistors, has become a particular problem in recent years, with gaps of just a few atoms allowing current to leak from the transistors and reducing the effectiveness of chips.

Nick Knupffer, Intel spokesman, said the new generation of chips counters this with a highly ductile chemical element called Hafnium that has been used to reduce leakage and enable a smaller, more energy-efficient and faster chip.

But impressive as the new chip is, things move fast in Silicon Valley and Intel is already applying the finishing touches to a chip that will be produced next year using a 32nm process. While the current chips are produced using dry lithography — in which there is an air gap between the lens and the surface of the wafer — these new chips will be made using the increasingly popular technique of immersion lithography, which uses a layer of water between the lens and the surface to boost the resolution of the process.
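
(The resolution gain falls out of the standard Rayleigh criterion, CD = k1·λ/NA - a minimal sketch from me, not the article; the k1 and NA values are typical, assumed numbers:)

```python
# Rayleigh criterion for optical lithography: CD = k1 * wavelength / NA.
# Immersion raises the usable numerical aperture (NA) because water
# (refractive index ~1.44) replaces the air gap between lens and wafer.

def min_feature_nm(wavelength_nm: float, na: float, k1: float = 0.3) -> float:
    """Smallest printable feature (critical dimension), in nm."""
    return k1 * wavelength_nm / na

ARF_NM = 193.0                       # ArF excimer laser wavelength
print(min_feature_nm(ARF_NM, 0.93))  # ~62nm: dry lithography, NA < 1
print(min_feature_nm(ARF_NM, 1.35))  # ~43nm: immersion pushes NA past 1
```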

Beyond this, Intel is aiming for 22nm by 2011, and even hopes to develop an 11nm process by 2015, but these will require some highly unusual new technologies. Knupffer said: ‘Moving past 32 we’re looking at a host of more exotic materials and techniques, such as three-dimensional transistors, nanowire (5nm wide silicon wire), carbon nanotubes, and the 3D stacking of chips.’

One particularly exotic material under the spotlight is indium antimonide (InSb), a material at the heart of a joint Intel-Qinetiq project to develop so-called quantum well transistors. It is thought that such devices, which confine the movement of particles to two dimensions, could effectively help neutralise the strange quantum phenomena that will occur as transistors shrink further.

Knupffer added that Intel researchers are also excited at the possibilities offered by ‘silicon photonics’, a technology that could replace wires with a tiny silicon version of fibreoptics. ‘Currently data travels on a processor and in between processors using copper wires. Fibreoptic communications is currently only used for big, long-distance applications but we’re looking at making it a chip-to-chip product.’

Engineers working on this project have already developed a range of devices, including a hybrid silicon laser, light guides (waveguides in silicon that can direct light, as in an optical fibre), and silicon modulators that can encourage light into stops and starts.

‘We’ve got all we need to create silicon photonic interconnects between chips on the motherboard,’ said Knupffer, who believes the technique could revolutionise the way devices are made. ‘On a motherboard you have the RAM right next to the processors, because copper traces can’t run too far, but that issue doesn’t exist with light. You could have all your memory in a different rack or part of the building, and this could have a massive impact on server design.’ Intel is developing the technology into a product that should emerge in the next 12 months, he added.

The Raman laser is based on Intel’s silicon photonics technology (above left), and IBM’s approach to cooling is to use microscopic rivers of water

Intel is not the only company in the chip-shrinking business. Earlier this year, IBM also announced the development of a 45nm high-k metal gate chip and, like Intel, is investigating other methods of keeping Moore’s Law afloat.

One technology thought to hold promise for near-term improvements is the development of 3D chips that enable transistors to be packed more tightly together. IBM is working on the development of 3D stacking technology, which replaces traditional metal wires with so-called ‘through-silicon vias’, vertical connections etched through the silicon wafer and filled with metal. It allows chips and memory devices that traditionally sit side by side on a silicon wafer to be stacked on top of one another, reducing the size of the chip package and boosting the speed at which data flows among the functions on the chip.

IBM claims the technique reduces the distance that information on a chip travels by a factor of 1,000. The company is running chips using the technology through its manufacturing line, and plans to enter production with them this year.

Initial applications are expected to be in wireless communications chips, although IBM is also reported to be converting the chip that powers its Blue Gene supercomputer into a 3D stacked chip.

This approach is not without its problems, however. Stacking chips on top of each other makes it harder to get the heat out. So in parallel with its stacking project, IBM has joined researchers at Germany’s Fraunhofer Institute to develop a cooling technique that uses a network of 50-micron pipes to circulate rivers of water between each layer in the stack.

It is all highly impressive, but as Imperial’s Gelenbe explained, it is designed to eke out a core technology that is approaching a more fundamental physical obstacle than any in its history. ‘As we make smaller and smaller circuits that are more and more densely packed, the number of atoms that you use per component is being reduced. As you go down there is a fundamental physical process which changes: your circuits are not deterministic any more, they are probabilistic. They become random, and as they become random they become less reliable.’

Gelenbe believes this atomic hurdle is already beginning to impact on the semiconductor industry’s roadmap. ‘If you look at the industry conference themes, there’s been a shift toward probabilistic circuits.

‘The question is, despite the fact that these individual circuits are less predictable, can we still build systems that are predictable and reliable using them?’
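
(One classic answer, as an illustration from me rather than from the article: von Neumann-style redundancy. Run several unreliable copies and take a majority vote, and the whole becomes more reliable than its parts, so long as each copy is right more often than not.)

```python
# Triple modular redundancy: run three unreliable copies of a computation
# and take the majority vote. If each copy fails independently with
# probability p < 0.5, the voted result fails far less often than one copy.

def tmr_failure(p: float) -> float:
    """Probability the majority vote is wrong: 2 or 3 of 3 copies fail."""
    return 3 * p**2 * (1 - p) + p**3

for p in (0.1, 0.01, 0.001):
    print(f"single copy: {p:<6} voted triple: {tmr_failure(p):.1e}")
# 0.1 -> 2.8e-02, 0.01 -> 3.0e-04, 0.001 -> 3.0e-06
```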

And according to Gelenbe, there is at least one good piece of existing evidence that probabilistic computing works and works well: the human brain.

‘You and I are highly unreliable organisms, our brains make mistakes but we get most of it right — just think of the amount of computing we do as we walk down the street and we do it reliably, despite the fact that all of our underlying mechanisms are unreliable.’

Intel’s Knupffer agrees with Gelenbe’s analysis, but declined to make a Moore-style prediction about what the future might hold.

‘We are reaching the limits of the CMOS [complementary metal-oxide- semiconductor] process. We haven’t announced anything beyond 11nm — future options could include spintronics (a technique that exploits the spin of electrons), optical or photonic computing, or quantum computing. Work is ongoing in all these areas, it could be none of them or it could be all of them. The one thing we are saying is that we intend to use silicon as a building block.’

And though the challenges facing the semiconductor industry are huge, its optimism in a future that it admits it cannot predict might just keep Moore’s Law afloat. Knupffer said: ‘Every time we build a fab, it’s a $3bn or $4bn investment. We’re taking a giant step, because we’re building a fab that will create processors that haven’t been designed yet on a process that hasn’t been invented yet for a market that doesn’t exist yet.

‘It’s a pretty big bet.’

Moore to come? | In-depth | The Engineer
 
It will be released this year. People keep talking about delays, but there was only ever one release date stated, and that was this March for Japan, which was cancelled. Just keep the faith, believe! ;)

 
Polyphony Digital boss Kazunori Yamauchi has claimed that Gran Turismo 5 will be "totally different" to GT5: Prologue.

Fans have been waiting over two years since Prologue hit PS3 for a full release of GT5 - but Yamauchi has reassured them that the full game's "scale and diversity" is far superior.

"Prologue is really only a prologue," he told CVG sister mag PSM3 this month.

"Though the look and feel of Prologue and the full version will have uniformity, the full version will be totally different in terms of scale and diversity in comparison."

Yamauchi said earlier this year that GT5 was '90 per cent' complete.

His new comments come as part of an eight-page racing preview spectacular in May's PSM3, which also features F1 2010, ModNation Racers, Blur, Split/Second and more.

CVG
 
AutoBlog UK: Can you explain more about the difference between premium and standard cars in the new GT5?

Yamauchi: Standard models won't have an interior view, and will have less detailing and no crash model.

1,000 vehicles in total
170 Premium models (full interior modelling; the interior corresponds to vehicle damage)
830 standard models


wtf? only 170 with interior view?
 
See, this doesn't fit with Kaz's previous claims, so most GT geeks seem to think it's a communication error. They already have 80-odd cars with interior views, so taking an extra 3 years to add only another 100 or so would be ridiculous.

My bet is he meant only 170 with damage shown from the interior.



But needless to say, they really need to hurry the feck up and launch this cnut, because a lot of people are getting more and more pissed off at being messed around for 3 years.
 
I'm so used to driving with first-person view (without watching the car) that I doubt I'll be using interior view that much. Maybe on the replays, I usually watch them fully while I smoke after a good race.
 
My bet is he meant only 170 with damage shown from the interior.

That would be my take on it.

But needless to say, they really need to hurry the feck up and launch this cnut, because a lot of people are getting more and more pissed off at being messed around for 3 years.

You'll get it in late October probably.
 
I was holding out, and I've finally got Prologue on the cheap, but I'm starting to get a bit fecked off with the delay. This game was frankly part of the reason I wanted a PS3, and it's not like the Xbox doesn't have a cracking driving game in Forza. This is turning into a mild farce.
 
There has only been one delay, because there has only ever been one release date ever given, and that was mid-March for Japan. One likely explanation is that Sony have asked them to put 3D support in.
 
I was thinking along the same lines, but I think June might be too soon.

I think it was delayed the first time for two reasons. Reason 1: Sony had several big-name games being released, and the biggest of them all, GT, would only dilute sales for the others, which probably need the market share more than GT does. Release in June alongside the launch of 3D tech and Sony have a double whammy to help move stock. Reason 2: 3D integration. I think GT without it has been finished for four or five months. It's now all about making sure the game can run 3D seamlessly at 60fps and fine-tuning the online play.
 
It's not going to do 3D at 60fps in 1080p (well, it's 1280x1080) - there's not a chance; it would be like running 2D at 120fps. I suppose, if they can optimise it enough, they could do it in 720p. If it's 30fps, then.....

Anyway, here's another example of how a different game went about it. Whether there's as much room for optimisation with GT5, I'm not sure.

Super Stardust 3D: 720p120 confirmed | DigitalFoundry
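
The raw fill-rate arithmetic behind that claim, for what it's worth (my numbers, pixel counts only):

```python
# Pixels per second for the modes under discussion. Stereo 3D renders
# every frame twice (once per eye), so 3D at 60fps costs the same raw
# fill rate as 2D at 120fps.

def px_per_sec(w: int, h: int, fps: int, eyes: int = 1) -> int:
    return w * h * fps * eyes

gt5_2d = px_per_sec(1280, 1080, 60)           # Prologue-style 2D mode
gt5_3d = px_per_sec(1280, 1080, 60, eyes=2)   # the same mode in stereo 3D
gt5_3d_720 = px_per_sec(1280, 720, 60, eyes=2)

print(gt5_3d / gt5_2d)      # 2.0  -> exactly the '2D at 120fps' equivalence
print(gt5_3d_720 / gt5_2d)  # ~1.33 -> 720p stereo looks reachable
                            #          with some optimisation
```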
 
Well, apparently it will run at 60fps in 720p in 3D, but with a slight graphical downgrade 'similar to that seen in the demo at the start of the year', and in 2D mode at 60fps in 1080p, 'same as Prologue'... according to GTPlanet.
 
I think the point is that when it's moving, you'll never notice it, so it's a waste of time.

That's true, but it would be better if little things like that were spot on - for the sake of replays at least, and some of the overview cameras.
 
Well, yes, I suppose it depends on where you're looking at it from.

2q04wm1.jpg


When you go aerial, they are clearly cut-out 2D sprites rather than fully modelled 3D trees.

nord004.jpg


There must be a reason for it: either it's a performance issue, or they think it doesn't matter. Probably a mix of the two - they are targeting 1280x1080 at 60 frames per second here, and replay mode could easily end up 1920x1080 at 30fps. We'll have to see.
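
A quick check on those two modes (my arithmetic): dropping to 30fps more than pays for the wider frame.

```python
# Raw fill-rate comparison: in-race rendering vs a possible full-width
# 30fps replay mode (resolutions from the post above).

race   = 1280 * 1080 * 60   # ~82.9M pixels/s - the in-race target
replay = 1920 * 1080 * 30   # ~62.2M pixels/s - a possible replay mode

print(f"replay needs {replay / race:.0%} of the in-race pixel rate")  # 75%
# A 1920x1080 replay at 30fps is actually cheaper per second than racing
# at 1280x1080 at 60fps, leaving budget for nicer effects in replays.
```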