Gene Dan's Blog

Monthly Archives: July 2012


No. 68: SimCity

31 July, 2012 3:01 AM / Leave a Comment / Gene Dan

Hey everyone,

Looking at satellite images all of last Saturday reminded me a lot of playing SimCity, so I decided to pick it up again over the weekend. I don’t play video games that much (at least not consistently), and sometimes I’ll abandon a game for months or years because I’ll have something better to do. For SimCity, the cycle proceeds as follows:

  1. Buy the game, quit in frustration after a month because I can’t get my city to grow
  2. Come back a few months later, finally get the city to grow a little, and then quit in frustration when it gets destroyed by an earthquake
  3. Come back a few months later, get a little better, then quit because summer’s over and I have to go back to school
  4. Come back after a couple of years, finally get good at the game, only to learn that a new version’s about to come out
  5. Purchase the new game, repeat…

Not long after I got my region to finally grow past 300,000 citizens, I learned that the next iteration in the franchise, SimCity 5, will be released sometime next year. I’m really excited about this next version since the current one’s been around for almost 10 years (despite that, the graphics have aged really well). Some of the early screenshots looked unpleasantly cartoonish and lacking in detail, but the artwork’s gotten better after each press release, so hopefully the game delivers.

Anyway, I’d like to get the population of my region past one million before next year, since I haven’t done that in any of the game’s previous versions (except by using arcologies in SC2). Here’s a snapshot of what I have so far:

This image represents a small portion of a huge terrain map that I downloaded from the Simtropolis community. Back in 2003, when the game first came out, I kept running out of room with the stock regions so I decided to get something bigger this time around. Here you see a cluster of 4 cities connected by 2 bridges spanning a river. Here’s a snapshot of Sector 0, the oldest and largest city in the region:

And here’s a closeup shot of downtown:

So basically, the idea of the game is to do whatever you want with your city as long as it doesn’t go bankrupt. For beginners, the hardest part is balancing the budget. In past versions, once you mastered the budget, you had pretty much mastered the rest of the game. However, in SC4 the additional task of traffic micromanagement made it difficult for even seasoned players to expand their cities. It wasn’t entirely the players’ fault: fans discovered a bug in the traffic engine that caused citizens to pick only the shortest routes for their commutes, not the fastest ones. This wasn’t discovered until the late 2000s, several years after SC4 had had a chance to make an impact. I think this is one of the reasons so many people found the game frustrating, and it may have been a contributing factor in EA’s decision not to create another version until 2013. Fortunately, a network modification tool has since fixed the problem, making it much easier to manage traffic:

The above image shows highway usage in the northeast section of Sector 0. This highway connects people living in the suburbs to the downtown area and the neighboring city to the east. In previous versions, you only needed to connect your roads and your city would grow – however, in the current version you have to consider the capacity, speed, noise, and distance of the various modes of transportation – otherwise, the city will fail to grow. I think the developers purposely made this the most important part of the game.

Eventually, I’d like to get into modding, since apparently EA/Maxis plans to make SC5 “fully moddable” (whatever that means). I’ve heard that the new Glassbox engine has a more realistic economic simulator, and I’d really like to see how it works if the developers open up the design. Unfortunately their last attempt at rebooting the SC franchise (SC Societies) failed miserably, and it looks like SC Social will fail shortly…I suppose in the worst case I’ll at least have SC4.

Posted in: Logs

No. 66: CPU Stress Testing with GIMPS

17 July, 2012 1:29 AM / Leave a Comment / Gene Dan

Hey everyone,

Last week I wrote about liquid cooling and overclocking my Linux server. I spent that Sunday mostly fiddling around with the CPU multiplier and voltage settings, but I didn’t subject the machine to any lengthy stress testing because I mainly wanted to see how high I could safely overclock the core. My friend Daniel told me that if I wanted to truly test the stability of a particular overclock setting, I’d have to run the computer over the course of several hours to make sure the programs ran correctly and that no wild temperature fluctuations took place. Furthermore, I’d have to run two separate batteries of tests – one with the AC on and one without – to make sure that the machine wouldn’t overheat without air conditioning.

Unfortunately, I couldn’t complete the entire experiment because it rained almost every day last week (and will continue to rain each day this week), which meant that the temperatures outside (and hence inside) wouldn’t be high enough to test the machine under summer conditions. However, I still had the opportunity to see how the computer would operate in cool conditions, which I had originally intended to do as a control. Thus, I decided to test the CPU using GIMPS at 5 clock settings: 3200 MHz, 3360 MHz, 3519 MHz, 3680 MHz, and 3840 MHz – with 3200 MHz as the stock setting.

Stress testing at 3840 MHz

The test was simple. I’d first use the terminal to dump the motherboard’s temperature readings into a text file, run GIMPS over the course of a workday (at least 9 hours), and then import the results into an Excel spreadsheet for comparison. I found some code for producing the text file on ubuntuforums.org, and used the following loop to log the temperatures each minute over the course of each test:

while true; do sensors >> log.txt; sleep 60; done

Each minute, the loop appended the following output to the text file:

w83627dhg-isa-0290
Adapter: ISA adapter
Vcore: +1.04 V (min = +0.00 V, max = +1.74 V)
in1: +0.00 V (min = +0.06 V, max = +1.99 V) ALARM
AVCC: +3.28 V (min = +2.98 V, max = +3.63 V)
+3.3V: +3.28 V (min = +2.98 V, max = +3.63 V)
in4: +1.84 V (min = +0.43 V, max = +1.28 V) ALARM
in5: +1.70 V (min = +0.66 V, max = +0.78 V) ALARM
in6: +1.64 V (min = +1.63 V, max = +1.86 V)
3VSB: +3.49 V (min = +2.98 V, max = +3.63 V)
Vbat: +3.44 V (min = +2.70 V, max = +3.30 V) ALARM
fan1: 0 RPM (min = 2636 RPM, div = 128) ALARM
fan2: 2163 RPM (min = 715 RPM, div = 8)
fan3: 0 RPM (min = 1757 RPM, div = 128) ALARM
fan5: 0 RPM (min = 2636 RPM, div = 128) ALARM
temp1: +27.0°C (high = +0.0°C, hyst = +100.0°C) sensor = thermistor
temp2: +27.0°C (high = +80.0°C, hyst = +75.0°C) sensor = thermistor
temp3: +32.0°C (high = +80.0°C, hyst = +75.0°C) sensor = thermistor

k10temp-pci-00c3
Adapter: PCI adapter
temp1: +27.5°C (high = +70.0°C)

radeon-pci-0200
Adapter: PCI adapter
temp1: +55.5°C
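Incidentally, a timestamped variant of the loop makes the log easier to line up against events later. Here’s a sketch, not what I actually ran – `SAMPLES` is kept small for demonstration, and a real run would use `while true` and `sleep 60`:

```shell
# Timestamped variant of the logging loop: write a time header before each
# sensors dump. SAMPLES is small here for demonstration; for a real run,
# raise it (or switch back to `while true`) and sleep 60 between readings.
: > log.txt                       # start with a fresh log
SAMPLES=3
for i in $(seq "$SAMPLES"); do
    date '+%Y-%m-%d %H:%M:%S' >> log.txt
    sensors >> log.txt 2>/dev/null || echo "(sensors unavailable)" >> log.txt
    sleep 1                       # use 60 for once-a-minute logging
done
```

With the headers in place, a spike in the log can be matched to whatever GIMPS was doing at that minute.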

You can see that the above output is quite cryptic – it took a while of searching the forums before I found out that the CPU reading is the one labeled “k10temp-pci-00c3.” Because the loop recorded these temperatures every minute, and each block of readings repeats every 27 lines, I was able to write a loop in VBA to extract the readings into an Excel spreadsheet:

Option Explicit

Sub import_temperatures()
    Dim r As Long, m As Long
    Dim temperature As String

    Range("A2:E1000000").Clear

    Open "C:\Users\Gene\Desktop\3840.txt" For Input As #1

    r = 1   'current line number in the log file
    m = 0   'current output row offset in the spreadsheet
    Do Until EOF(1)
        Line Input #1, temperature
        'Each sensor dump spans 27 lines; the five temperature readings
        'sit at lines 16-18, 22, and 26 of each block.
        If r = 16 Or (r - 16) Mod 27 = 0 Then
            Range("A2").Offset(m, 0).Value = Right(Left(Trim(temperature), 19), 4)
        ElseIf r = 17 Or (r - 17) Mod 27 = 0 Then
            Range("B2").Offset(m, 0).Value = Right(Left(Trim(temperature), 19), 4)
        ElseIf r = 18 Or (r - 18) Mod 27 = 0 Then
            Range("C2").Offset(m, 0).Value = Right(Left(Trim(temperature), 19), 4)
        ElseIf r = 22 Or (r - 22) Mod 27 = 0 Then
            Range("D2").Offset(m, 0).Value = Right(Left(Trim(temperature), 19), 4)
        ElseIf r = 26 Or (r - 26) Mod 27 = 0 Then
            Range("E2").Offset(m, 0).Value = Right(Left(Trim(temperature), 19), 4)
            m = m + 1   'last reading in the block; advance to the next row
        End If
        r = r + 1
    Loop

    Close #1

End Sub
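For what it’s worth, the extraction doesn’t strictly require Excel – assuming the log keeps the 27-line structure shown above, a grep/awk pipeline can pull the k10temp reading out directly. Here’s a sketch, with a two-entry sample standing in for the real log:

```shell
# Two abbreviated k10temp entries standing in for the real sensors log;
# on an actual log.txt, only the pipeline at the bottom is needed.
cat > log.txt <<'EOF'
k10temp-pci-00c3
Adapter: PCI adapter
temp1:        +27.5°C  (high = +70.0°C)
k10temp-pci-00c3
Adapter: PCI adapter
temp1:        +31.0°C  (high = +70.0°C)
EOF

# Grab the two lines after each k10temp header, keep the temp1 line,
# and strip everything except the number.
grep -A2 'k10temp-pci-00c3' log.txt \
    | awk '/temp1/ { gsub(/[^0-9.]/, "", $2); print $2 }'
```

On the full log this prints one temperature per minute of testing, which pastes straight into a spreadsheet column.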

The test took 5 days to complete, so I had to be patient. Here are the results:

Stress Testing Results at 100% Load

You can see that the results are very impressive. At stock settings, the CPU temperature hovered at around 45 degrees Celsius at 100% load. This means I can leave the computer on all day and even the most intensive task won’t push the temperature past 50 degrees (in fact, not even past 47 degrees). Even at 3840 MHz, the temperature stayed at around 55 degrees Celsius over the course of 9 hours. I did, however, have to increase the voltage for clock speeds of 3519 MHz and above, so I’m not sure whether the temperature increases beyond that speed were due to voltage increases, multiplier increases, or a combination of both. Moreover, I’m not sure whether the increased clock speeds made GIMPS run any faster, since the per-iteration time seems to depend on which exponent you’re testing (I’m sure there’s a way to measure it, though). Nevertheless, I’m very satisfied with the results and with the liquid cooling system’s ability to keep temperatures stable while I’m away from home.
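As an aside, summary statistics over a day’s readings don’t need a spreadsheet either – given a file of one temperature per line (like the extracted column), awk can report the minimum, mean, and maximum. A sketch with made-up sample values (`temps.txt` is a hypothetical file name):

```shell
# Min / mean / max over one-temperature-per-line input.
# The four sample values below are made up for demonstration.
printf '44.0\n45.0\n46.0\n49.0\n' > temps.txt

awk 'NR == 1 { min = $1; max = $1 }
     { sum += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
     END { printf "min=%.1f mean=%.1f max=%.1f\n", min, sum / NR, max }' temps.txt
# prints: min=44.0 mean=46.0 max=49.0
```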

Posted in: Logs / Tagged: corsair h80, cpu benchmarking, GIMPS, liquid cooling, mprime, overclocking, overclocking AMD phenom II, prime 95

No. 65: Liquid Cooling & Overclocking

10 July, 2012 1:59 AM / 1 Comment / Gene Dan

Hey everyone,

A while back I wrote about a Linux server I set up in order to do statistical work remotely from other computers. So far, I haven’t done much with it other than learn R and LaTeX, but recently I’ve discovered that it would be a great tool to document some of the algorithms I’ve developed through my modeling projects at work in the event that I would ever need to use them again (highly likely). Back in January, I wrote that I was concerned about the CPU getting too hot since I left it on at home while I was away at work. Since I leave the AC off when I’m gone, the air going into the machine would be hotter and would hinder the cooling ability of the server’s fans.

Original setup with stock AMD fan + heatsink

I could leave the AC on, but that wouldn’t be environmentally friendly, so I’ve been looking for other solutions to keep my processor cool. One of the options I decided to try was liquid cooling – which I heard was more energy efficient and better at cooling than traditional air cooling found on stock computers. Moreover, I had seen some really creative setups on overclockers.net – which encouraged me to try it myself. To get started, I purchased a basic all-in-one cooler from Corsair. This setup isn’t as sophisticated as any of the custom builds you’d see at overclockers, but it was inexpensive and I thought it would give me a basic grasp on the concept of liquid cooling.

The installation was pretty easy – all I had to do was remove the old heatsink and screw the pump/waterblock onto the CPU socket. Then, I attached the 2 x 120 mm fans along with the radiator to the back of the case:

New setup with Corsair H80 system installed

However, one of the problems with these no-fuss all-in-one systems is that you can’t modify the hose length, which might make the system difficult or impossible to install if your case is too large or too small. As you can see, I got lucky – the two fans along with the radiator barely fit inside my mid-tower Antec 900 case. If it were any smaller the pump would have gotten in the way and I would have had to remove the interior fan to make it fit. Nevertheless, I’m really satisfied with the product – as soon as I booted up the machine I was impressed by how quietly it ran.

Naturally, I decided to overclock the processor to test the effectiveness of the new cooling system. I increased the clock speed of the CPU (AMD Phenom II) from 3200 MHz to 3680 MHz and ran all 4 cores at 100% capacity to see how high temperatures would get. Here are the results below:

Overclocking at 3680 MHz

You can see that the maximum temperature was just 46 C – that’s pretty cool for an overclocked processor. I only ran the test for a few minutes because I had been steadily increasing the clock speed little by little to see how far it could go. The test ran comfortably at 3519 MHz, but as soon as I reached 3680 MHz the computer started having issues with booting up. I was able to reach 3841 MHz by increasing the voltage to 1.5 V and 3999 MHz by increasing the voltage to 1.55 V. I was somewhat disappointed because I couldn’t get the clock speed to surpass 4 GHz (as the Phenom II has been pushed to much higher clock speeds with more sophisticated cooling techniques). At this point I couldn’t even run mprime without having my computer crash, but I was able to continue the stress testing by using BurnK7:

Stress testing with BurnK7 at 100% load – 3999 MHz

You can see that the core temperature maxed out at 60 C, so I’m pretty sure I could have pushed it a little further. However, the machine wouldn’t even boot up after I increased the multiplier, so I called it a day. I contacted my friend Daniel Lin (who had been overclocking machines since middle school) with the results, and he responded with his own stress test using an Intel Core i7 quad core:

Daniel Lin’s machine at 4300 MHz

The impressive part is that he was able to reach 4300 MHz using nothing but stock voltages (1.32 V) and air cooling. He told me that I had an inferior processor, and I believe him (then again, you get what you pay for – the Intel i7 is three times more expensive). If he had liquid cooled his computer he probably could have pushed it even further. Anyway, Daniel told me that you can’t be sure an overclock is truly stable unless you stress test it over the span of several hours. So, I decided that my next task would be to get Ubuntu’s sensors to output their readings into a text file while I run mprime over the course of 24 hours. I’d also like to compare temperature readings depending on whether or not the AC is turned on while I’m away at work. I’ll have the results up next week (hopefully).

Posted in: Logs / Tagged: antec 900, corsair h80, liquid cooling, overclocking

No. 64: Player Piano

3 July, 2012 2:21 AM / Leave a Comment / Gene Dan

Hey everyone,

I started reading Kurt Vonnegut’s Player Piano after I passed C/4 last month. The plot centers on an engineer named Paul Proteus and takes place in an alternate post-WWII era in which machines have displaced almost all human labor. The only jobs left are for engineers, business managers, and hairstylists. As the story progresses, the machines get so good that even the engineers end up losing their jobs – hence the title, Player Piano (a piano that plays music without the need for a human performer). I guess you can see where I’m going with this…I wouldn’t want to spoil the rest, and I haven’t finished the book myself, though I’m just 20 pages shy of the end. Anyway, I became interested in the novel when I was searching the web for articles on post-scarcity economics. The basic idea is that classical economics stems from the conflict between unlimited human wants and limited natural resources. These wants are satisfied through the exchange of goods and services – two or more parties mutually agree to exchange resources – and this is done by valuing one party’s resources against another’s. However, if a society were able to costlessly produce goods and services, the system of exchange and valuation would break down: you’d no longer be able to value one good relative to another, which makes it impossible for two parties to agree on how to exchange goods. Some people believe such a system would be more egalitarian, since people would no longer have to fight over limited resources. On the other hand, others have hypothesized that such a system would lead not to equality, but to a society dominated by an elite few – the original owners of capital (this is where post-scarcity economics overlaps with Marxist economics).

I’ve thought about such a scenario many times (maybe every other day) but I haven’t been able to reach any solid conclusions over the outcome of this situation. First of all, companies can save costs by automating manual labor and firing workers whose skills have become obsolete. This would result in short term gains because the company would be able to gain market share over its competitors by offering lower prices. However, in order for the company to make money, people would have to be willing to pay for its products. But if such automation were to occur on a large scale, you would end up in a paradoxical situation in which companies would be able to produce goods at no cost – but people wouldn’t be able to buy these goods because they don’t have jobs and aren’t earning wages. Moreover, if these workers aren’t buying goods, then the company won’t make any money. Thus, despite the economy’s ability to produce unlimited goods and services for its people, these people aren’t made any richer because there’s no longer a way to allocate these goods amongst themselves.

Historically, this sort of doomsday scenario hasn’t occurred because automation created more jobs than it destroyed. However, you don’t want to be too careless and assume that automation will continue to create jobs indefinitely just because it has in the past, since there’s no guarantee that this trend will continue. Perhaps machines will become so effective that they can replace all human labor…or maybe we’ll eventually get to the point where we can costlessly create humanoid robots that are superior to their biological counterparts, rendering humans obsolete. On the other hand, it might be the case that a post-scarcity society is impossible to achieve. I noticed in the previous paragraph that the inability to equitably allocate resources amongst a population represents scarcity in services (or scarcity in capital, if you take the Marxist point of view). So, while we might never reach post-scarcity, there could be some scenario like “post-human labor” that would present similar problems.

Anyway, Player Piano is somewhat similar to Zamyatin’s We, and Vonnegut himself said that he “cheerfully ripped off the plot of Brave New World, whose plot had been cheerfully ripped off from Yevgeny Zamyatin’s We.” I read We during the summer after high school, and now that I’m almost done with Player Piano I can see that the basic structure of the two books is similar, though Vonnegut’s contains more humor and has a more playful tone than Zamyatin’s. Vonnegut’s book was published in the U.S. after WW2, whereas Zamyatin’s was written in 1920, suppressed by the Soviets, and not published in the Soviet Union until 1988. I’d recommend reading both, as you’d get to compare the perspectives of the Soviet Union and the United States, both before the War and afterward.

Posted in: Logs / Tagged: automation, dystopia, post-scarcity economics, vonnegut player piano
