Synchronized access to a resource

Suppose we have some resource (a file on disk) to which we have to write bytes produced by different threads. These threads are spawned by a process that listens for events and spawns a thread every time an event occurs. Since we have only one resource, we have to synchronize the method of the class that performs the write operation:

    synchronized void write(byte[] bytes) {
        // write data to file
    }

or create a mutex object:

    Object mutex = new Object();
    void write(byte[] bytes) {
        synchronized (mutex) {
            // write data to file
        }
    }

Now suppose that we have a very old hard drive, so it performs write operations very slowly, and several times a day a very large number of events occur. The threads will then form something like a queue for the resource. So I have the following questions:

  1. How long could such a queue get?
  2. If several threads are waiting for the resource and the resource is freed, which thread will be the first to occupy it? Will it follow the FIFO principle?
  3. How will the situation change if the threads have different priorities?
  4. If the resource is a DataSource object that produces Connection objects participating in connection pooling, will it behave the same as the file above?

asked Nov 8 '11 at 19:11

3 Answers

  1. The length of the queue is the number of blocked threads. If you keep creating threads and they all end up blocked on the write, you'll quickly bring the system to its knees. You should certainly use a thread pool, reuse threads instead of creating new ones, and block once too many events are queued. See Executors.
  2. No, it's not FIFO. The order is undefined. You might want to use a fair ReentrantLock if you want FIFO (see the sketch after this list), but it's more time-consuming than basic synchronization or an unfair lock.
  3. Platform-dependent, with no deterministic behaviour.
  4. It all depends on the implementation of the DataSource. It might use a fair algorithm or a fair lock, or it might just use synchronization and not be fair. You need to read the documentation of the DataSource, if it is sufficiently detailed.
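For point 2, here is a minimal sketch of what a fair-lock version of the writer could look like. The class name, file name, and stream handling are illustrative assumptions, not part of the original question:

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.concurrent.locks.ReentrantLock;

    class FairFileWriter {
        // Passing true asks for a fair lock: waiting threads acquire it roughly
        // in the order they requested it (FIFO), at some cost in throughput.
        private final ReentrantLock lock = new ReentrantLock(true);

        void write(byte[] bytes) throws IOException {
            lock.lock();
            try (FileOutputStream out = new FileOutputStream("data.bin", true)) {
                out.write(bytes);
            } finally {
                lock.unlock();
            }
        }
    }

An unfair ReentrantLock (the default) or plain synchronized is usually faster, but gives no ordering guarantee.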

answered Nov 09, 11:00

From my limited understanding:

1- It can get pretty big, limited by the number of threads the system can create (I think it's capped by a maximum thread count or some OS-level resource limit). I've seen 100+.

2- Yes, threads should gain the resource in a FIFO fashion.

3- No, threads with higher priority might get in line sooner, but after they're in line, their place is fixed.

4- A DataSource could behave differently; it basically depends on the implementation of the pool. I think Apache's dbcp (as used in Tomcat) behaves this way, except that there are timeouts that fire if the pool can't hand out a connection within a set time (a rough config sketch follows below).
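As a rough illustration of that timeout behaviour, assuming the commons-dbcp 1.x API; the URL and numbers here are made up:

    import org.apache.commons.dbcp.BasicDataSource;

    BasicDataSource ds = new BasicDataSource();
    ds.setUrl("jdbc:h2:mem:test");  // placeholder JDBC URL, just for illustration
    ds.setMaxActive(10);            // hand out at most 10 connections at a time
    ds.setMaxWait(5000);            // getConnection() gives up after waiting 5 s for a free connection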

Hope this helps.

answered Nov 08, 11:23

  1. There is no theoretical limit to the amount of waiting threads with this design.
  2. Unpredictable. The Java spec implies no thread ordering for this type of simple lock. The actual order depends on the JVM implementation and the host architecture, and you should never rely on it.
  3. This is up to the thread scheduler, and may or may not have an effect. However, the JVM will attempt to schedule higher-priority threads before lower-priority ones, and will actually preempt lower-priority threads that are already running.
  4. As far as your threads are concerned, both are exactly the same. The threads depend on the synchronization mechanisms to control resource usage, the exact resource should not change their behavior, other than perhaps longer wait times if the connection is slower than the file.

answered Nov 09, 11:00

You say that threads will occupy the freed resource randomly; how can I make it FIFO in such a situation? And what thread engine are you talking about? - Maks

Take a look at QueuedSynchronizers for something that will act like a FIFO. Unfortunately, I have never used that synchronizer, so I am learning about it as we speak! - 3martini

If you want FIFO, then write to an in-memory queue and have a single thread read from that queue and write to the file, which removes contention for the file resource. This is easily accomplished with a BlockingQueue: one thread reads from the queue and many threads write to it (see the sketch below). - petirrojo
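A rough sketch of that queue-based approach, assuming a bounded queue and a single daemon writer thread (the class name, file name, and queue capacity are made up):

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    class QueuedFileWriter {
        // Bounded queue: producers block once it is full, which caps memory use.
        private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<byte[]>(1000);

        // Called by the event threads; entries are drained in FIFO order.
        void write(byte[] bytes) throws InterruptedException {
            queue.put(bytes);
        }

        // The single consumer thread owns the file, so no lock on the file is needed.
        void start() {
            Thread writer = new Thread(new Runnable() {
                public void run() {
                    try (FileOutputStream out = new FileOutputStream("data.bin", true)) {
                        while (true) {
                            out.write(queue.take());
                        }
                    } catch (IOException | InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            writer.setDaemon(true);
            writer.start();
        }
    }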
