Folding@Home (also known as F@H or FAH) is a distributed computing project run by Stanford University which studies protein folding, misfolding, aggregation, and related diseases. Instead of using a single computer, it distributes the workload over thousands of computers around the world, to simulate timescales thousands to millions of times longer than previously achieved.
OCAU is currently the fifteenth-ranked Folding@Home team in the world, not counting the standard Anonymous and Google teams. The rankings in the Folding@Home statistics are a point of much pride, battled out between teams from everywhere, united under almost any banner. OCAU has been in stiff competition with the chief American team HardOCP, with the number 1 ranking changing hands often during the history of folding.
You can help Stanford University with the science by downloading and running the Folding@Home client. Just click the Download button for Windows, or All Downloads for the many other clients. Enter 24 as your team number to join OCAU. More information about OCAU's Folding@Home team can be found in the official Distributed Folding forum.
We even have our own incentive badges of honor: milestone forum signatures and avatars, available here.
OCAU Team 24 Folding@Home Wallpaper is available in the following resolutions
You can Fold on pretty much any PC, and every bit helps. However, for those who wish to enhance their Folding performance or build dedicated Folding machines, we have a guide with an example machine and performance stats for most CPUs and GPUs.
See further down for Quick Guides:
- Stanford's Uniprocessor Guide - simple basic client, low PPD
- Stanford's SMP Multi-Processor Guide - recommended client
- Stanford's GPU Guide - client prefers Nvidia but works on ATI too
- Stanford's PS3 Guide - yes, the gaming console can Fold!
If you're not running Windows, that's fine too:
- Stanford's Linux Uniprocessor Guide
- Stanford's Linux SMP Multi-Processor Guide
- Stanford's Mac SMP Multi-Processor Guide
Windows is the most common Folding OS. Folding with the console clients is a tad quicker than with the GUI versions. CPU Folding on Linux has generally been quicker than on Windows; however, if you're Folding on your GPU as well, Linux needs WINE to run the GPU client, and clients run that way are far slower than on Windows.
*New* Client Control Center
New alternative control center: FAH GPU Tracker V2. You will need to change the default team setting to 24.
Quick CPU Console Client Setup
Setting up a folding program is not hard at all. Below is a step-by-step guide to installing the CPU SMP command-line client.
XP User - SP3 (Recommended) and .NET Framework v2.0 Needed
Vista User - .NET Framework is already installed by default
You MUST have a logon password for this client to run.
- Client Direct Link: XP+Vista CPU SMP Console Client
- Copy the client to "C:\FAH"
- Create a shortcut to the "FAH6.34-win32-SMP.exe" file from that directory and place it on your desktop (or wherever, it doesn't matter)
- Right click the shortcut and go to properties. In the target field add the flags -smp -verbosity 9 It should look like this:
C:\FAH\FAH6.34-win32-SMP.exe -smp -verbosity 9
(-smp is the auto-detect setting. You can manually choose how many cores to use, including Hyperthreaded cores, by adding the count after -smp, like so: -smp 4 -verbosity 9. The number of cores you enter will be how many threads are launched. Never enter a number higher than your CPU has; e.g. if you have a 4-core CPU without Hyperthreading, you should not enter a number higher than 4.)
- Run the shortcut.
- You will now be prompted for answers regarding your setup
- Username - "Your Choice of Name here"
- Team Number - 24
- Passkey - "Your Passkey here"
- Ask before fetching/sending work - NO
- Use proxy - NO
- Acceptable size of work assignment and work result packets - Big
- Change advanced options - Yes
- Launch automatically, install as a service in this directory - NO (personal choice; the service install has been known to be buggy, so for ease, NO)
- Core Priority - Low
- CPU usage requested - 100
- Disable highly optimized assembly code - NO
- Pause if battery power is being used - NO (only say yes for laptops)
- Interval, in minutes, between checkpoints - 15
- Memory, in MB, to indicate - Leave as default
- Set -advmethods flag always - Yes (yes can sometimes get you units with bonus points, sometimes it makes no difference)
- Ignore any deadline information - NO
- Machine ID - 1 (Read below on information on how to run multiple clients)
- Disable CPU affinity lock - NO
- Additional client parameters - leave blank
- IP address to bind core to -
Once that is done, the command window will proceed and start telling you how it is downloading information. Let it run, and check that it is working and no errors are appearing. If there are, come into the F@H OCAU Forum and make a post; otherwise that's it, you're done.
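The shortcut-target step above boils down to the client path plus two flags. As a minimal sketch (the path and client name are the examples from this guide; the helper function name is hypothetical), this is how the command line is assembled with and without a manual core count:

```shell
# Sketch: assemble the SMP client command line from the guide.
# fah_cmd [CORES] - CORES is optional; omit it to let -smp auto-detect.
fah_cmd() {
  client='C:\FAH\FAH6.34-win32-SMP.exe'   # example install path from the guide
  cores="$1"
  # ${cores:+ $cores} appends " N" only when a core count was given
  printf '%s\n' "$client -smp${cores:+ $cores} -verbosity 9"
}

fah_cmd     # auto-detect: C:\FAH\FAH6.34-win32-SMP.exe -smp -verbosity 9
fah_cmd 4   # forced:      C:\FAH\FAH6.34-win32-SMP.exe -smp 4 -verbosity 9
```

Remember the guide's rule: never pass a core count higher than your CPU actually has.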
Just remember to have it Running (you can automate it by setting it up as a service)
We recommend you grab HFM.net to monitor your clients' progress.
Quick GPU Systray Client Setup
Setting up a folding program is not hard at all. Below is a step by step guide on installing the GPU Systray client.
Nvidia GTX4xx users: you need the new GPU3 client to fold; see our forums. Information in that thread will likely now be out of date, but it's a good place to start for some background.
Cards older than the GTX2xx series will need to run the GPU2 client. The GPU3 client will not run on those older cards.
Please have recent Video Card Drivers installed (ie. for Nvidia that is later than 177.79)
- Choose a username
- GPU3 Client Direct Link: XP+Vista GPU Systray Client
- GPU2 Client Direct Link: XP+Vista GPU Systray Client
- Install the client
- Start / Programs / Folding@Home-gpu / Folding@Home
- You will now have an icon in your Systray - Right Click / Configure
- User Tab
- Enter your Username
- Enter Team Number 24
- Enter Passkey if you have one (Passkey not needed for GPU but it doesn't hurt to enter it if you have one)
- Connection Tab
- Tick Yes to Allow receipt of work assignments and return of results greater than 10MB
- Advanced Tab
- Core Priority / Slightly Higher and push slider to 100%
- Tick Yes Do not Lock Cores to specific CPU
- Additional Client Parameters - There are a few options; here are the most common flags to enter:
- Single GPU System "-gpu 0"
- Multi GPU System (please see Forum Setup Guide)
- Single Nvidia Card Client with problem "-gpu 0 -forcegpu nvidia_g80"
- Single ATI Card Client with problem "-gpu 0 -forcegpu ati_r700" (needed for HD5xxx Cards)
- Machine ID "1" (if you're already running the CPU client then "2")
For GPU Folding tips, see our Setup Sticky.
- We recommend you grab HFM.net to monitor your clients' progress.
Quick WinAFC Setup
Please note: The current SMP client is thread based, meaning one process that launches multiple threads. Use of WinAFC is no longer needed to maximise output and in fact can be detrimental to performance. The information has been left here though as it may still prove useful for some people with some setups.
WinAFC is a priority and affinity changer. This program is used to force how much priority your OS is giving to a certain program and how many CPU cores are also assigned to said program. So treat the program with respect, fooling around can cause harm to your system.
For Folding this is quite handy, as your GPU clients can sometimes be left idling whilst your CPU client is crunching, reducing your PPD. We typically like to set the CPU SMP client to use all available cores at Normal priority (*DO NOT SET THE CPU CLIENT HIGHER THAN NORMAL, it will make your desktop unusable). We can then set our GPU clients slightly above the CPU, at High priority.
Configuring WinAFC is done in a text file, so here is an example of what you can add. Below is an example for a quad core with Hyperthreading. If you are using a dual core, a quad core with no Hyperthreading, a hex core etc., you will need to adjust the CPU count accordingly. (Count both real and virtual "Hyperthreading" cores.)
4 Cores: CPU0+CPU1+CPU2+CPU3
6 Cores: CPU0+CPU1+CPU2+CPU3+CPU4+CPU5
8 Cores: CPU0+CPU1+CPU2+CPU3+CPU4+CPU5+CPU6+CPU7
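The mask strings above follow a simple pattern: CPU0 through CPU(N-1), joined with "+". As a minimal sketch (the helper function name is hypothetical, not part of WinAFC), here is how to generate the mask for any logical core count:

```shell
# Sketch: build the WinAFC CPU mask string for N logical cores
# (count both real and Hyperthreaded cores, as the guide says).
cpu_mask() {
  n="$1"
  i=0
  mask=""
  while [ "$i" -lt "$n" ]; do
    # ${mask:+$mask+} prepends the existing mask and a "+" separator,
    # except on the first iteration when mask is still empty
    mask="${mask:+$mask+}CPU$i"
    i=$((i + 1))
  done
  printf '%s\n' "$mask"
}

cpu_mask 4   # CPU0+CPU1+CPU2+CPU3
cpu_mask 8   # CPU0+CPU1+CPU2+CPU3+CPU4+CPU5+CPU6+CPU7
```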
Browse to your WinAFC Folder, Open the affinityinput.txt file, and scroll down to this:
##################################################
## Application Profile lines
##################################################
#
# An application profile is specified on a single line.
# An application profile includes the following information: an application
# name, a CPU mask, and optional attributes in the following format:
# C:\Path\To\Application := CPU0+CPU1 [attr1=val1,attr2=val2]
#
# Check the documentation and the provided examples for more
# information about these fields.
Beneath that section of text insert the following:
##################################################
## Folding Home Profile lines
##################################################
#
# CPU Client
*FahCore_a3.exe := CPU0+CPU1+CPU2+CPU3+CPU4+CPU5+CPU6+CPU7 [priority=normal]
#
# GPU Client
*FahCore_11.exe := CPU0+CPU1+CPU2+CPU3+CPU4+CPU5+CPU6+CPU7 [priority=high]
*FahCore_14.exe := CPU0+CPU1+CPU2+CPU3+CPU4+CPU5+CPU6+CPU7 [priority=high]
*FahCore_15.exe := CPU0+CPU1+CPU2+CPU3+CPU4+CPU5+CPU6+CPU7 [priority=high]
Specific Items of Note
There are several arguments you can set to improve the client's performance on your computers. Be aware that use of these arguments may cause instability (rare) and slow response times. If you're not sure whether you need them, you probably don't.
- bigpackets [yes/no]
This argument is set when you configure or reconfigure a client. It is not set at the command line. By default it is no, but if a user chooses yes, it allows receipt of WU's more than 5MB in size. These WU's have large point values assigned to them, often 600 points, so receiving them is appreciated if your system can process them in a timely fashion.
- -advmethods [yes/no]
When used in conjunction with the bigpackets argument (decided when configuring the client), -advmethods will allow the receipt of experimental WU's, such as QMD's, that may adversely impact the performance of your computer/s. Experiences with this argument are varied, but the generally accepted solution is to only run one (1) client per machine which accepts these highly experimental WU's. Any additional clients on a given computer should run without the -advmethods argument. Currently, QMD WU's will only run on Intel CPU's due to a licensing issue with the compiler and the Intel math library.
- Multiple Clients
You no longer need to run multiple CPU clients, as the new SMP2 A3 Core client and NotFreds Virtual Machine client are capable of using multi-core CPU systems. You can run either a CPU client or a GPU client; however, running both is most welcome and a fast way of adding up PPD. (You will need one GPU client per video card; 2 clients for dual-GPU cards like the GTX295.) Also, when you are installing the clients, you must give each one a different Machine ID (step 19 above).
- HD5xxx GPU Clients
You will need to add this flag -forcegpu ati_r700
Folding 'Flags' explained
You will at times hear people refer to various 'flags' or 'arguments'. These are used to give additional access and or functionality to the folding clients. Many of the setup guides will have included the use of these flags as they are necessary for certain types of folding.
The flags are used in one of several different ways. You can add them to the end of the target field in a shortcut that points to the folding .exe, you can specify them in the 'Additional parameters' section of the folding config, or you can add them to the 'Additional parameters/arguments' field in the systray clients setup.
-advmethods * Tells the client to download new and/or experimental units. Sometimes good for getting bonus points. Keep in mind there are at times only so many types of unit available; during those times, specifying this flag won't net you a different unit to what someone not using the flag would get. When units are released by Stanford that require this flag, there will usually be a post made in the official Distributed Folding forum discussing the pros and cons of running them.
-verbosity 9 * Tells the client to include the maximum amount of information in the log file. Useful for diagnosing client errors. It is recommended that you use this flag in every folding setup, every time. It's no use adding the flag after the error has occurred. If it's there already though, you'll have as much information as is available to help you sort out your problems. If you post in the forums asking for help you can expect to be asked to post your log file. Without using this flag the log will likely not include any of the helpful information that people will need to help you with your problem.
-configonly * Tells the client to open and only offer configuration options. Once the config is complete the client will close. Useful if you need to change settings after initially setting up the client.
-delete x * Tells the client to delete the specified unit. Useful if you get stuck with a unit that keeps crashing over and over.
-send x * Tells the client to return the specified unit. Useful if you get a unit that hasn't uploaded for some reason.
-oneunit * Tells the client to fold one unit and then pause.
For the gpu:
-gpu x * Specifies which gpu you want the client to fold on. -gpu 0 for the first, -gpu 1 for the 2nd etc.
-forcegpu xxx * Pretty self explanatory: forces the client to run on a card that matches the description (e.g. nvidia_g80, ati_r700). Useful if you're mixing cards. Usually not required for single-card rigs or identical-card rigs. For folding on ATI cards, though, you may have to use the flag no matter what.
For the cpu:
-smp x * Tells the client to fold an smp unit; x specifies how many cores you wish to use. It's not strictly necessary to specify the core count (you can use just -smp), but the occasional hiccup can be resolved by using the full -smp x form.
-bigadv * Tells the client to fold a big advanced unit. These are primarily designed to run on dual-socket systems with quad-core CPUs. They will run on a suitably overclocked i7 rig, though. Best used on a dedicated folding machine, not on a shared-use box.
-forceasm * Tells the client to ignore the circumstances of the previous shutdown and run with fully optimised code. If you close the cpu/smp client for any reason, or your system crashes, the client will restart and disable SSE instructions, running what is referred to as 'standard loops'. This slows folding considerably. If you are confident that your system is stable, you should use this flag. Please note: if you use this flag and your system is not stable, you run the risk of returning garbage instead of results. This can only be detrimental to the project.
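The GPU flags above combine in a predictable way: -gpu picks the card index, and -forcegpu is appended only when you need to override detection. As a minimal sketch (the helper function name is hypothetical; the -gpu and -forcegpu flags and the nvidia_g80/ati_r700 values are the ones named in this guide), here is how those parameter strings are put together:

```shell
# Sketch: assemble the GPU client's "Additional Client Parameters" string.
# gpu_flags INDEX [FORCE] - INDEX is the card number (0 for the first card),
# FORCE is an optional -forcegpu value such as nvidia_g80 or ati_r700.
gpu_flags() {
  idx="$1"
  force="$2"
  # ${force:+ -forcegpu $force} adds the override only when FORCE was given
  printf '%s\n' "-gpu $idx${force:+ -forcegpu $force}"
}

gpu_flags 0            # single-card system: -gpu 0
gpu_flags 0 ati_r700   # HD5xxx workaround:  -gpu 0 -forcegpu ati_r700
gpu_flags 1            # second card in a multi-GPU rig: -gpu 1
```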
If you need any more help with anything feel free to make a post in the F@H OCAU Forum.
3rd Party Monitoring Tools
- HFM.net (recommended)
- Electron Microscope
- FAHLogStats.NET (new - 19 August 2006)
- FAHLogStats (currently not being updated/supported)
- EOC Folding@Home Stats (recommended)
- Official All Teams
- Official OCAU
- Official Individual
- OCAU Stats (non-working)
- Mazzanet's Stats (non-working)
- PriFinitty - Process Priority/Affinity Changer
- Bill2's Process Manager - Process Priority/Affinity Changer
- Trayit - Allows minimizing console windows to Systray
- Active 395 Down 12
- Avg User PPD 4,769.60 Up 481.5
PPD Avg / Users / Total
- 6k+ / 55 / Up 3 / 1,583,796
- 3k+ / 35 / Down 1 / 152,293
- 1.5k+ / 29 / Same / 61,059
- 800+ / 32 / Down 9 / 34,962
- 400+ / 40 / Down 7 / 23,218
- 200+ / 56 / Up 3 / 15,484
- 100+ / 65 / Up 8 / 9,449
- 1+ / 83 / Down 5 / 3,746
Total PPD Team Avg 1,884,002 Up 138,743
- Active 407 Down 7
- Avg User PPD 4288.1 Up 303.4
PPD Avg / Users / Total
- 6k+ / 52 / Same / 1,425,867
- 3k+ / 36 / Up 10 / 156,386
- 1.5k+ / 29 / Down 5 / 63,911
- 800+ / 41 / Up 1 / 44,926
- 400+ / 47 / Up 1 / 27,279
- 200+ / 53 / Down 9 / 14,692
- 100+ / 57 / Down 9 / 7,765
- 1+ / 88 / Up 4 / 4,435
Total PPD Team Avg 1,745,259 Up 95,599
- Active 414 Down 42
- Avg User PPD 3984.7 Up 407.3
PPD Avg / Users / Total
- 6k+ / 52 / Up 12 / 1,364,311
- 3k+ / 26 / Down 4 / 110,114
- 1.5k+ / 34 / Up 4 / 70,864
- 800+ / 40 / Up 1 / 45,864
- 400+ / 46 / Down 9 / 27,130
- 200+ / 62 / Down 7 / 18,292
- 100+ / 66 / Up 4 / 9,258
- 1+ / 88 / Down 43 / 3,843
Total PPD Team Avg 1,649,660 Up 18,353
- Active 456 Down 9
- Avg User PPD 3,577.40 Up 611.3
PPD Avg / Users / Total
- 6k+ / 40 / Down 5 / 1,334,199
- 3k+ / 30 / Up 1 / 127,766
- 1.5k+ / 30 / Down 12 / 61,490
- 800+ / 39 / Up 4 / 42,153
- 400+ / 55 / Down 2 / 31,892
- 200+ / 69 / Same / 19,504
- 100+ / 62 / Down 13 / 8,622
- 1+ / 131 / Up 18 / 5,678
Total PPD Team Avg 1,631,307 Up 252,052
- Active 465 Up 14
- Avg User PPD 2966.1 Down 358.7
PPD Avg / Users / Total
- 6k+ / 45 / Down 1 / 1,052,383
- 3k+ / 29 / Down 9 / 130,509
- 1.5k+ / 42 / Up 13 / 89,429
- 800+ / 35 / Down 3 / 38,966
- 400+ / 57 / Up 2 / 31,583
- 200+ / 69 / Up 8 / 20,264
- 100+ / 75 / Up 15 / 10,675
- 1+ / 113 / Down 11 / 5447
Total PPD Team Avg 1,379,255 Down 120,229
- Active 451 Down 17
- Avg User PPD 3,324.8 Up 77.5
PPD Avg / Users / Total
- 6k+ / 46 / Same / 1,169,852
- 3k+ / 38 / Down 4 / 161,833
- 1.5k+ / 29 / Down 6 / 62,902
- 800+ / 38 / Down 16 / 41,064
- 400+ / 55 / Down 5 / 31,326
- 200+ / 61 / Down 17 / 17,654
- 100+ / 60 / Down 4 / 8,372
- 1+ / 124 / Up 35 / 6,481
Total PPD Team Avg 1,499,484 Down 20,250
- Active 468 Down 22
- Avg User PPD 3,247.3 Up 357.4
PPD Avg / Users / Total
- 6k+ / 46 / Down 1 / 1,150,841
- 3k+ / 42 / Up 7 / 168,305
- 1.5k+ / 35 / Down 9 / 72,678
- 800+ / 54 / Down 4 / 57,114
- 400+ / 60 / Up 2 / 34,516
- 200+ / 78 / Up 11 / 21,981
- 100+ / 64 / Down 5 / 8,829
- 1+ / 89 / Down 23 / 4,900
Total PPD Team Avg 1,519,734 Up 103,680
- Active 490 Down 152
- Avg User PPD 2,889.9 Down 356.1
Minor Update Production Totals
- 6k+ / 47 / Down 18
- 3k+ / 35 / Down 20
- 1.5k+ / 44 / Down 7
- 800+ / 58 / Down 23
- 400+ / 58 / Down 36
- 200+ / 67 / Down 32
- 100+ / 69 / Down 25
- 1+ / 112 / Up 9!!!
Total PPD Avg 1,416,054 Down 667,876
- Active 642
- Avg User PPD 3,246.00
PPD Avg / Users / Total
- 6k+ / 65 / 1,554,783
- 3k+ / 55 / 233,682
- 1.5k+ / 51 / 108,100
- 800+ / 81 / 86,095
- 400+ / 94 / 54,393
- 200+ / 99 / 27,919
- 100+ / 94 / 13,086
- 1+ / 103 / 5,872
Total PPD Team Avg 2,083,930
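The "Avg User PPD" figure in each stats block is just the team's total PPD divided by the number of active users. As a quick sketch using the figures from the last block above (the helper function name is hypothetical), you can verify the arithmetic:

```shell
# Sketch: check that Avg User PPD = Total PPD / Active users.
# avg_ppd TOTAL USERS - prints the average to two decimal places.
avg_ppd() {
  # awk handles the floating-point division that plain shell arithmetic can't
  awk -v total="$1" -v users="$2" 'BEGIN { printf "%.2f\n", total / users }'
}

avg_ppd 2083930 642   # 3246.00, matching the reported Avg User PPD above
```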
- FAH Addict
- Old OCAU Folding webpage - Outdated and inactive
- Official server stats
- Official list of running projects
- Official details for every project
- The Folding@Home Community Wiki
Our TOP 10 Folding Competitors
- Custom PC & bit-tech
- TSC! Russia
- Alliance Francophone
- Team MacOS X
The Battle rages
Over the history of the Project several teams have vied for the title of #1 contributor.
OCAU first took the No. 1 position on 29-Oct-2001, 10 days after F@H2 started; oc.com and the Francophones also held the No. 1 spot during those first 10 days.
29/10/01 to 07/03/02 OCAU 129 days
07/03/02 to 11/03/02 [H] 4 days
11/03/02 to 12/03/02 OCAU 1 day
12/03/02 to 11/06/02 [H] 90 days
11/06/02 to 16/06/02 OCAU 5 days
16/06/02 to 17/06/02 [H] 1 day
17/06/02 to 20/06/02 OCAU 3 days
20/06/02 to 15/09/03 [H] 450 days
15/09/03 to 11/01/04 OCAU 118 days
11/01/04 to 23/06/04 [H] 162 days
23/06/04 to 21/04/05 OCAU 302 days
21/04/05 to 12/05/05 [H] 21 days
12/05/05 to 30/12/05 OCAU 231 days
31/12/05 to 21/09/06 [H] 264 days
So, as at 21-09-2006, sub-totals give OCAU 793 days in total as the #1 team, while [H] has had 993 days in the lead.
The Relic Trophy
Originally given to victims of The [H]orde as they overtook their hapless adversaries (a tradition originally started by Kvizbar ([H]) against Ars Technica), The Duck was given new life by fxr91 (OCAU) when he Photoshopped/Gimped The Duck onto a stand and christened it. The Relic Trophy was born. Each time the trophy-holder is passed by the other team, the Relic Trophy is e-handed to the new #1, once it's certain the lead change isn't short-lived (usually a few weeks must pass before the handover).
- OCAU's F@H team is sponsored by Computer Alliance.