Were one or more machines required to be running 24/7 on the network after the first block was mined?
He didn't have to, and it appears that he actually didn't. Block #1 has a timestamp more than 5 days later than the genesis block. Any decent computer in 2009 could have mined a block at difficulty 1 much faster than that, most likely within a couple of hours. (I have a computer I bought in 2010, not particularly fast by the standards of that time, and it produces about 2 MHash/sec, so it would mine a difficulty-1 block in an average of about 2100 seconds, or roughly 36 minutes.) So almost certainly, neither Satoshi nor anybody else was mining during those 5 days. There's another gap of 24 hours between blocks #14 and #15, which suggests nobody was mining during that period either.
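As a sanity check on that estimate, using the standard fact that a difficulty-1 block takes about $2^{32}$ hashes on average:

$$t \approx \frac{2^{32}\ \text{hashes}}{2 \times 10^{6}\ \text{hashes/s}} \approx 2147\ \text{s} \approx 36\ \text{min}$$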
I don't think we know this. It's likely he did some sort of testing with multiple nodes prior to release, but I don't know of any data on how many nodes there were on the live network in the earliest days after release. We also have no way to know for sure whether they were being operated by Satoshi or by someone else.

Then no blocks get generated, that's all. Nobody collects any block rewards if nobody is mining. If any transactions were created during that time, they could not have been confirmed until someone started mining again. It's not inherently necessary that people be mining at all times: things start right back up when miners come back. The software can't really tell the difference between "nobody is mining" and "people are mining, but due to bad luck there hasn't been a new block for a while".
As mentioned above, probably not.
Where is the old option TraceInternal->True?
Let me first answer your second question, since I can only guess about the main question: it's really just the coloring that goes wrong, and it has nothing to do with functionality. You can see that it is not even related to TraceInternal by calling Trace with an officially documented option (see the example below).

The reason for the wrong coloring is the FunctionInformation.m file. There, the coloring patterns for each built-in function can be found, and the entry for Trace is simply wrong: Trace is registered as taking 1 or optionally 2 arguments, which is not correct. Fixing that entry makes everything look non-red.

As for the behavior of TraceInternal, one should first note that it indeed still works. You can test this by, e.g., searching this site for TraceInternal and trying the examples you find. You will see that it does trace internal functions, just not Roots.
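A minimal sketch of both points (exact output depends on your version; what matters is the front-end coloring of the first call and the extra internal steps in the second):

```mathematica
(* Both extra arguments are documented options of Trace, yet the front
   end colors the call red as if it had too many arguments: *)
Trace[Sin[x] + Cos[x], TraceDepth -> 1, TraceOriginal -> True]

(* The undocumented TraceInternal option still works and reveals
   evaluation steps that a plain Trace[...] omits: *)
Trace[Sin[x + 0.5] /. x -> 1.0, TraceInternal -> True]
```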
What I suspect is that Wolfram turned tracing off for some part of the Solve code. I can mimic something like this by using Block to temporarily overwrite the value of $TraceOn or $TraceOff; with this, I can prevent the tracer from stepping into some part of my own code. It seems Wolfram did something else, though, because I couldn't find traces of such a mechanism when looking closer at your Solve call. By "looking closer" I mean something like the sketch below, which prints the evaluation stack whenever Roots is reached. In its output, you will find many things that are called, for instance PolynomialGCD.
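A minimal sketch of that idea, using the common trick of a definition whose condition prints the stack and then fails, so that evaluation itself is unchanged:

```mathematica
(* Temporarily give Roots a definition whose condition prints the current
   evaluation stack and then returns False, so the definition never fires
   and evaluation proceeds normally. Internal`InheritedBlock keeps the
   change local to this computation. *)
Internal`InheritedBlock[{Roots},
  Unprotect[Roots];
  call : Roots[___] /; (Print[Column[Short /@ Stack[_]]]; False) := call;
  Solve[x^3 - 2 x + 2 == 0, x]
]
```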
Still, it is not possible to catch those expressions with Trace. In my opinion, the more general question should be: how is it possible to make parts of the evaluation completely invisible to Trace?
Disable compression on SSL/TLS connections in Apache 2.2.16 using mod_headers
First, CRIME only applies if your website uses all three of: SSL/TLS-level compression, secrets (such as session cookies) carried inside the encrypted requests, and an attacker who can both observe your encrypted traffic and cause the victim's browser to send requests containing attacker-chosen content. It is only useful for hijacking active sessions, and is most useful if your server doesn't require session IP matching. While many websites do use this combination, it's not as common as many would think. Also, some statistics suggest that only a minority of clients and servers ever negotiate TLS-level compression in the first place.

What Does NOT Work:
The Vary header just tells upstream caches which request headers they must take into account when deciding whether a cached response can be reused. That's important for your caching strategy, but not for this particular vulnerability. Unsetting the Accept-Encoding field will only affect mod_deflate or mod_gzip; it does nothing about compression at the SSL/TLS layer. So your method will not work.
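For reference, the approach from the question looks like the snippet below; the directive itself is valid mod_headers syntax, but it only prevents HTTP-level compression:

```apache
# Strips the client's Accept-Encoding header so mod_deflate/mod_gzip
# won't compress responses - does nothing about TLS-level compression.
RequestHeader unset Accept-Encoding
```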
What Does Work:

There are two options for protecting your server: you can disable compression support in your SSL/TLS library by recompiling it without compression, or you can patch your server to support the SSLCompression directive. Apache 2.4.x supports this directive natively, and Apache 2.2.22 can be patched relatively easily. Various operating system distributions are back-porting the patches now; check with your distro provider for details. (Most Linux distros use ancient versions of Apache for which they maintain custom back-ports of security patches, so you'll pretty much be at your distro's mercy if you're using their sanctioned packages.)
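On a server that supports it (Apache 2.4.3 or later natively, or a patched 2.2.x build), the working fix is a one-line directive in your SSL configuration:

```apache
# Disable compression at the TLS layer, closing the CRIME vector.
SSLCompression off
```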
How Sure Are You:

There's a very easy-to-use SSL "problem" scanner available from SSL Labs. It will detect whether your server is vulnerable to CRIME. You can semi-ignore its BEAST warnings, as all modern browsers have fixed that issue client-side, though that depends on your particular circumstances.
I think there is something wrong with this problem. How do you know if the vertical acceleration is zero?
You're correct that the vertical component of Newton's second law should be
$$\sum F_y = m a_y$$
You set $a_y = 0$ because the block is not flying up off the table. This is implied by the wording of the problem: usually blocks are assumed not to be flying up off their tables unless it is explicitly stated that that is a possibility. Plus, the fact that they ask for an apparent weight suggests that the block is staying against the surface it sits on.
However, you can also show that the block stays on the table if the diagonal force is $15\ \text{N}$. To do so, you use the rule that the normal force will be exactly as strong as it has to be to cancel out the forces pushing the block against the surface.

In detail, you add up all the forces acting on the block other than the normal force, which in this case gives you $F\sin\theta - mg$. If this "subtotal force" is directed into the surface, the normal force will have the same magnitude but will act away from the surface, so that the net perpendicular force, including the normal force, is zero:
$$N + \sum_\text{other} F_\perp = 0$$
On the other hand, if the "subtotal force" is directed away from the surface, the normal force cannot counteract it. In that case the normal force will be zero, and there will be a net force on the block. Since net force equals mass times acceleration, you can then conclude that the block will have an acceleration perpendicular to the table, i.e. it will fly up off the table.

In short, the status of the block's motion and the normal force depends on the sum of the other forces, as summarized below:
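Assuming the applied force $F$ points above the horizontal, so that its vertical component $F\sin\theta$ is directed away from the table:
$$N = \begin{cases} mg - F\sin\theta & \text{if } F\sin\theta \le mg \quad \text{(block stays on the table, } a_y = 0\text{)}\\[4pt] 0 & \text{if } F\sin\theta > mg \quad \text{(block leaves the table, } m a_y = F\sin\theta - mg\text{)}\end{cases}$$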
Windows 2008: re-use of deleted blocks on virtual thin disks
Check with the vendor, but you are likely wasting your time archiving. The space allocated to the LUN will not shrink by virtue of deleting files in Windows. Note that the space allocated to the LUN is different from the size of the LUN itself: if you thin-provision a 100GB LUN and write 10GB of data to it, the SAN will allocate 10GB worth of raw disk blocks on its underlying disks to the LUN. Then, when Windows writes to a new block, the percentage of your thin LUN that is allocated/provisioned grows. Over time, as Windows requests writes to pristine (never-touched) blocks, those blocks will be allocated by the SAN from its global pool of unused blocks, and the allocated/provisioned size of the LUN will increase further.
Eventually, with enough data churn, a thin-provisioned LUN will become thick-provisioned. It may take a long time, and it depends entirely on the OS's behavior.

Without special software (the kind basil has mentioned), the SAN has no way of knowing which blocks can be reclaimed, because the SAN can't "see" NTFS (or any other filesystem) by itself. Additionally, most of the time you need to have this software running in Windows before the volume becomes thickly provisioned; but again, check with the vendor.

In general, thin provisioning buys you time (you don't have to allocate all your storage at the get-go), but eventually you will need to back your volumes 100% with storage. Note that my understanding is that Linux does prefer to overwrite freed blocks instead of using pristine ones, but I don't have a reference to back that up.
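As a hypothetical illustration of what such reclamation can look like: some arrays detect and reclaim zeroed blocks, in which case zeroing the free space from inside Windows (here with Sysinternals SDelete) lets the SAN deallocate them. Whether your array does this is vendor-specific, so check before relying on it:

```
rem Zero all free space on D: so a zero-detecting array can reclaim
rem those blocks into its global free pool (vendor-dependent behavior).
sdelete.exe -z D:
```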
Asynchronous commit on Postgres over iSCSI with BBU storage
Actually, that's kind of backwards. Turning synchronous_commit off is unsafe no matter what, in that it permits the database to lose recently committed transactions if it crashes. This is true with or without a BBU or crash-safe SSD storage.

What synchronous_commit = off does is let you trade durability for speed. On storage where flushes are expensive, it batches them up and spares clients the latency of waiting for each flush. It has much less effect on storage where flushes are fast, since there you do little waiting for commits anyway - so there it's pretty much all downside and little benefit.

In general, you should not set synchronous_commit off globally. As the documentation advises, you should SET LOCAL synchronous_commit = off in the specific transactions that you don't need to be durable, leaving it enabled otherwise (see the sketch below). That way you might lose transactions you're not so fussed about, but not the ones making important changes.

If you can't afford any loss of transactions that clients believe are committed, you may instead want to consider commit_delay, which pauses briefly to try to batch a few commits together before flushing. This can improve throughput on I/O subsystems with really slow flushes without sacrificing durability.
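A minimal sketch of the per-transaction approach (the table and values are illustrative):

```sql
BEGIN;
-- Only this transaction may be lost if the server crashes;
-- the setting reverts automatically at COMMIT/ROLLBACK.
SET LOCAL synchronous_commit = off;
INSERT INTO page_views (url, viewed_at) VALUES ('/home', now());
COMMIT;
```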
You may also want to consider using UNLOGGED tables for specific tables whose contents you can afford to lose in a crash. If Pg crashes while such a table is dirty, it'll truncate the table and you'll have to re-populate it, but there will be no wider database corruption.

If anyone tells you to turn fsync off - please don't. That setting should really be called eat_my_data = on, and it is totally unsuitable for anything except throw-away instances where you can easily reconstruct the lot after a crash.
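For completeness, a minimal sketch of the UNLOGGED approach mentioned above (the table name and columns are illustrative):

```sql
-- Writes skip the write-ahead log, so they're fast, but the table is
-- truncated after a crash; use only for data you can rebuild.
CREATE UNLOGGED TABLE session_cache (
    session_id text PRIMARY KEY,
    payload    jsonb,
    expires_at timestamptz
);
```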