Compatible Support Forums
SnowHawk

PCI latency?

Recommended Posts

Do you think PCI latency would affect anything?

SiSoft Sandra reports it as too high for 2 components

and it flags my ATI All-in-Wonder Radeon with "an early version of hardware, may contain bugs"

 

The two components are a network card and the onboard HPT366 controller.

 

It is a BP6 (440BX chipset)

Dual Celery 366@523 (96MHz bus)

128MB PC125

128MB PC133

2 network cards

SB Live! X-Gamer

Pioneer 10X DVD

Ricoh 8x8x32x

ATI All-in-Wonder Radeon 32MB DDR

 

The latency of the network card is 128

and the Latency of the HPT366 is 248

They share an IRQ.

 

The Radeon has a PCI latency of 255... <---- Very odd

And the PCI to AGP bridge has a PCI latency of 64

 

All other components are at 32

 

Could this be causing the random freezing in OpenGL and D3D I'm getting?

 

I used to get the "snap, crackle, pop" with my SB Live!, but with the updated drivers (3300a) it doesn't do it anymore...

Quote:
The latency of the network card is 128
and the Latency of the HPT366 is 248
They share an IRQ.

The Radeon has a PCI latency of 255... <---- Very odd
And the PCI to AGP bridge has a PCI latency of 64 cycles
All other components are at 32

Could this be causing the random freezing in OpenGL and D3D I'm getting?

I used to get the "snap, crackle, pop" with my SB Live!, but with the updated drivers (3300a) it doesn't do it anymore...

"PCI to AGP bridge has a PCI latency of 64" this is the master AGP latency value, "The Radeon has a PCI latency of 255" meant the Radeon will try using 255 at all times if possible, but the master AGP latency timer will terminate it at 64, it will have to issue another IRQ demand for another 64 cycles if required still more data transfer.

All the other devices are ordinary 32-bit, 33MHz PCI devices; their 32-cycle setting consumes the equivalent of 64 cycles of a 66MHz AGP device.

PCI devices are also governed by a master PCI-to-PCI latency value, which terminates any device's grant when the master timer expires, no matter what each individual device's value is set to.

All of these values affect bus throughput efficiency: set them too low and efficiency suffers, set them too high and you get lock-ups from excessive latency and late responses to IRQs.

The latency value determines how long, i.e. how many cycles, a bus-master device can take control of the system in its turn, locking out all other devices for that period. Every bus-master device requires an IRQ to demand service at its IRQ priority level.

This is the fundamental foundation of all x86 architecture: even if your CPU has 3.2GB/s of bandwidth, the moment a 32-bit, 33MHz PCI sound card takes control of the system, your actual system bandwidth is whatever the sound card can do, roughly 130MB/s for that service period. For an AGP video card it's about 520MB/s.
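If you want to check those figures, it's just width x clock arithmetic (peak numbers, ignoring protocol overhead; rough Python sketch):

Code:
# Peak bus bandwidth in MB/s: bytes per transfer x clock x transfers per clock.
def peak_mb_s(width_bits, clock_mhz, transfers_per_clock=1):
    return (width_bits / 8) * clock_mhz * transfers_per_clock

print(peak_mb_s(32, 33))      # ~133 MB/s - plain 32-bit/33MHz PCI (the ~130MB/s above)
print(peak_mb_s(32, 66, 2))   # ~528 MB/s - AGP 2x (the ~520MB/s above)
print(peak_mb_s(64, 33))      # ~266 MB/s - 64-bit PCI, mentioned further down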

Have you noticed that when copying a floppy disk your computer acts as if it has come to almost a complete halt? That's because your floppy disk controller is a bus-master device.


BTW, all of the above no longer holds true for nFORCE, OK? :)


Do you think the latency of the network card and the HighPoint controller could be causing the lockups?


That's the problem, SnowHawk: any one of them, or all of them together, can cause it. Traditional x86 architecture is very inefficient; everything depends on "grants" to bus-master devices, so if one device has a problem they all do, and one device affects the response time and throughput of all the others, including your CPU. A simplified example...

 

There are 16 IRQs with ordered priorities. Assuming worst-case response for all 16 IRQs, each with an equivalent lock-out service time of 64 cycles at 66MHz, call it ~1ms per IRQ... that's a 16ms worst-case response to an IRQ.

 

Many things are time-sensitive (latency-sensitive); sound, and CD burners especially, become very, very touchy. A 32-bit device at 33MHz with a 32-cycle grant moves only about 128 bytes per burst, while 16ms worth of CD-quality sound data is roughly 3KB. Is that enough to avoid data starvation for 16ms? No, and that starvation is the source of the "snaps", "crackles" and "pops"; for CD burners, when it happens you get frisbees. Attempting to improve data transfer efficiency by cranking up the latency values makes worst-case response time even worse... i.e., lock-ups.
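Rough numbers, assuming CD-quality audio and ignoring bus protocol overhead (the 16ms worst case is the figure from above):

Code:
# One capped burst: 32 cycles on a 32-bit bus moves 32 x 4 = 128 bytes.
burst_bytes = 32 * 4

# CD-quality audio: 44.1kHz, 16-bit, stereo.
cd_audio_rate = 44_100 * 2 * 2          # = 176,400 bytes/s
needed_bytes = cd_audio_rate * 0.016    # data needed to ride out a 16ms gap

print(burst_bytes, round(needed_bytes)) # 128 vs ~2822 (~3KB) -> starvation -> pops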

 

Remember, if you ever need real-time, latency-sensitive hardware or applications, the likes of real-time sound editing, real-time video editing, or even interactive games... do yourself a favor and buy a SiS735 or nFORCE motherboard, or on Intel machines get yourself 64-bit PCI cards to improve the worst-case data transfer, since the worst offenders are almost always PCI devices. For true high performance you need to overclock the hell out of your PCI bus whenever possible.

 

BTW, avoid ISA cards like the plague they are. An ISA bus-master device is performance damnation.


There is a bug with Sandra that misreports bus speed and multiplier settings. Don't trust it too much.


Hmm...I only get that thing about PCI latency with my duallie rig, not my A7V though, and both are Via chipsets.

 

The SiS735 and the nForce 420 are good options, and the nForce 220 might be okay, but the first two should be good once they hit the market. I would wait for reviews of the nForce first, but the preliminary results for the SiS735 kick arse, in some cases beating the AMD760!!!

 

Fortunately, I don't think these solutions will be utilizing an ISA slot. ISA is toast now, but if it takes this long to kill off a standard, we're going to see a long and slow death for a ton of stuff.

Quote:
Hmm...I only get that thing about PCI latency with my duallie rig, not my A7V though, and both are Via chipsets.

That, my friend, is because the extra CPU imposes an extra, permanent load on system bandwidth, i.e. a longer average IRQ response time than on a single-CPU system.

The funny thing most experts never realized --> the extremely careful, highly optimized memory management of the various OSs is a pile of HACKS around the inefficient underlying x86 hardware architecture: only one thing at a time can take control of the system, in turn, which requires huge/deep stacks and even large swap-out operations that suck up yet more of the system bandwidth you need.

SiS735 and nFORCE motherboards support distributed-processing hardware, allowing point-to-point, simultaneous memory access for all devices at the same time. This not only leverages all of the available system bandwidth, it also does away with most of the need to swap out whole processes/tasks, cutting even more wasted system bandwidth. :)

The bandwidth efficiency of that distributed-processing hardware depends purely on how far the motherboard, the various other hardware components and the OS go with their support for it.


This is a problem with my GeForce 3 and GeForce 2 Ti and my POS Sound Blaster 16 PCI card. Is there any possible way to fix this, or would I have to buy a Sound Blaster Live! Value to fix this problem?


Overclock your PCI bus to increase data throughput.

 

Adjust the PCI latency values in the BIOS settings; increase or decrease them in steps of 16, as that is the minimum number of cycles for servicing 16 IRQs.
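For reference, the "PCI latency" value being adjusted is just one byte (offset 0x0D, the Latency Timer register) in each device's standard PCI configuration header; the BIOS and tools like Sandra are reading and writing that register. A minimal sketch of dumping it on a modern Linux box (sysfs paths assumed, not something you'd run on the Windows machines in this thread):

Code:
import glob

# The standard PCI config header is 64 bytes; the Latency Timer lives at 0x0D.
for cfg_path in sorted(glob.glob("/sys/bus/pci/devices/*/config")):
    with open(cfg_path, "rb") as f:
        header = f.read(64)
    device = cfg_path.split("/")[-2]          # e.g. 0000:01:00.0
    print(f"{device}: latency timer = {header[0x0D]}")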

 

Adjusting the BIOS latency value is very much like adjusting the MTU value of a network connection: higher values increase data throughput thanks to less overhead, at the cost of a slight increase in latency per connection.

 

Other methods within the BIOS --> disable all unused IRQs to reduce the IRQ service cycle time, and remove any and all wait-state settings for IRQ services; your system is only as fast as the slowest device that has control of it.

 

Do not disable ACPI if possible, as Windows will dynamically assign unused IRQ resources to devices that need them.


Yeah, I see what you mean, but when you say PCI bus do you mean the front-side bus, since overclocking the FSB would overclock the PCI and everything else? I don't totally understand that; I have never had to mess with the PCI bus before, so help me out if you could. Oh, and I am not going to mess with anything anyway, because this sound card has no driver support at all in Windows XP, so I am going to wait for the Sound Blaster Live! Value I have ordered. Thanks for your help though :)


Very much so. When you OC the FSB, the PCI bus gets the same speed-up until the next step of the PCI divider is reached, at which point the PCI bus clock drops back toward 33MHz.

 

The main goal is to hold the PCI bus to a 5% to 10% overclock, which won't bother any of the newer IDE drives in use; above 10%, very few IDE drives can cut the mustard for long-term use.
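A quick sketch of the divider arithmetic (the FSB/divider pairings below are just typical BX-era examples, not necessarily what your board's BIOS offers):

Code:
# PCI clock = FSB / divider; the chipset only offers a few fixed dividers,
# so between divider steps the PCI bus gets dragged up along with the FSB.
def pci_clock(fsb_mhz, divider):
    return fsb_mhz / divider

for fsb, div in [(66, 2), (75, 2), (100, 3), (103, 3), (112, 3), (133, 4)]:
    pci = pci_clock(fsb, div)
    over = (pci / 33.3 - 1) * 100
    print(f"FSB {fsb:>3} / {div} -> PCI {pci:4.1f} MHz ({over:+5.1f}% vs 33.3)")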

 

If you have SCSI drives the overclocking range can be much greater, as newer SCSI devices use internal clocks rather than the reference clock from the PCI bus. Most onboard controllers are also affected by the FSB clock.

 

That's the reason why the nFORCE architecture does not suffer from the same problem.

