Monday, May 28, 2012

Definition of 'Zero-Sum Game'

A situation in which one participant's gains result only from another participant's equivalent losses. The net change in total wealth among participants is zero; the wealth is just shifted from one to another.

Investopedia explains 'Zero-Sum Game'

Options and futures contracts are examples of zero-sum games (excluding costs). For every person who gains on a contract, there is a counterparty who loses. Gambling is another example of a zero-sum game.

A stock market, however, is not a zero-sum game because wealth can be created in a stock market.

Foundations of Parallel and Distributed Systems

CS273: Foundations of Parallel and Distributed Systems

INSTRUCTORS:
Lecturer: Satish Rao (satishr@cs.berkeley.edu)
Office hours (tentative): Mo: 1:30-2:30, Th: 2-3.
TA: Isabelle Stanton
Office hours (tentative): Tu: 11-12, Fr: 2-3.
Fundamental theoretical issues in designing parallel algorithms and architectures and topics in distributed networks. Shared memory models of parallel computation. Parallel algorithms. Fundamentals of routing for general networks. Algorithms for object location in distributed networks. Partitioning methods. Distributed computing. Game theoretic aspects of distributed agents. Foundations of Game Theory. On-line auctions.

Administration

Homeworks, a project, and perhaps a final will determine your grade. The course is more front-loaded than other courses, so be sure to get your work done as it is posted.

Project.

Choose a paper or propose a "paper to be" related to the topics we will present in this course. It should have a significant theory/analysis component or suggest such a direction. You should propose to either understand or extend the paper. You should be prepared to present and summarize the contents of the paper including sufficient relevant background. Proposals are due September 12, 2010.

Homework Guidelines

The point of homeworks is to learn the material well. Please do so.

Homeworks/Lecture Notes.

  • Lecture 1. Introduction. Load Balancing. In ps or pdf
  • Lecture 2. Power of two choices. In ps or pdf
  • Lecture 3. Expectation. In ps or pdf
  • Lecture 4. Markov, Chebyshev, and Chernoff/Hoeffding. In ps or pdf
  • Lecture 5. BSP Computing. Linear Relaxation (Sparse Matrix-Vector Multiply). In ps or pdf
  • Lecture 6/7. Planar Separator Theorem. In ps or pdf
  • Lecture 8/9. Hypercube, butterfly, route selection. In ps or pdf
  • Lecture 10/11/12. Scheduling, random priorities, bounded queues. In ps or pdf
  • Lecture 12/13. Random delays, Lovász Local Lemma, $O(C+D)$ scheduling (nearly). In ps or pdf
  • Lecture 14. General Path Selection, Linear Programming, Path Selection. In ps or pdf
  • Lecture 15. Path Routing. Shortest path with exponential penalties. In ps or pdf
  • Lecture 16/17. Experts, Zero-Sum Games, LP games. In ps or pdf
  • Lecture 18. The PRAM. In ps or pdf
  • Lecture 19/20. The PRAM. In ps or pdf
  • Lecture 20/21. The PRAM: Complexity In ps or pdf
  • Lecture 22. Distributed Coloring. In ps or pdf
  • Lecture 23/24. Matching in NC, Sorting. In ps or pdf
  • Lecture 25. Distributed Computing. In ps or pdf
  • Lecture 26. Graph Partitioning. Pieced together from lectures here , here and here .
  • Lecture 27. Iterative linear systems. Taken from here .
  • Lecture 28. Pieced together from here. Just did Arrow's theorem. Did not go into Algorithmic Mechanism Design.

Some lecture references.

Previous Versions.

  • The 2009 site.
  • The 2006 site.
  • The 2005 273 website.
  • Other resources...

    Multicore - Parallel processing on multiple cores

    26 Jan 2007 (Updated 18 Sep 2011)
    This package realizes parallel processing on multiple cores/machines.
    File Information
    Description This package provides parallel processing on multiple cores on a single machine or on multiple machines that have access to a common directory.
    If you have multiple function calls that are independent of each other, and you can reformulate your code as
    for k = 1:numel(parameterCell)
      resultCell{k} = myfun(parameterCell{k});
    end
    then, replacing the loop by
    resultCell = startmulticoremaster(@myfun, parameterCell);
    allows you to evaluate your loop in parallel. All you need to do is start as many additional Matlab sessions/processes as you want slaves working, and run
    startmulticoreslave
    in those additional Matlab sessions.
    Everything is programmed in plain and platform-independent Matlab - no toolboxes are used, no compilation of mex-files is necessary.
    Please get started with 1. the documentation in file multicore.html, 2. the help lines of function startmulticoremaster.m and 3. the demo function multicoredemo.m.
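    As a concrete illustration of the loop-replacement pattern above, here is a hypothetical sketch (myfun.m is a made-up example function; it would have to be on the Matlab path of the master and of every slave session):

```matlab
% Hypothetical usage sketch of the Multicore package.
% Build one cell entry per independent call of myfun.
parameterCell = cell(1, 50);
for k = 1:50
    parameterCell{k} = k;   % here: just a scalar input per call
end

% On the master session: distribute the evaluations to all running
% slave sessions (each slave was started with: startmulticoreslave).
resultCell = startmulticoremaster(@myfun, parameterCell);

% Equivalent sequential loop, for comparison:
% for k = 1:numel(parameterCell)
%     resultCell{k} = myfun(parameterCell{k});
% end
```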
    Discuss with other users here: http://groups.yahoo.com/group/multicore_for_matlab
    I have spent many hours to develop this package. If you would like to let me know that you appreciate my work, you can do so by leaving a donation: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=GPUZTN4K63NRY
    Keywords: Parallel processing, distributed computing, multiple cores.
    MATLAB release MATLAB 7.5 (R2007b)
    Comments and Ratings (63)
    28 Jan 2007 Michal Kvasnicka The demo should be more self-descriptive. A comparison with a single-core run is very important for evaluating the impact of parallelization.
    28 Jan 2007 lucia Long overdue as a standard Matlab capability.
    Does it work also with scripts or only with functions?
    28 Jan 2007 Zhijun Wang Parallel processing on multiple cores in a single machine is very useful!
    29 Jan 2007 Zhijun Wang Corrections:
    I tested the program to see whether it can improve parallel processing on a single machine with the following code:
    ______________________
    N = 20;
    for m = 1:10
      tic
      for z = 1:N
          testfun(z)
      end
      toc
    end
    __________________________
    The results show the program does not improve anything compared with my code on a single machine!
    So I rate again
    So I rate again
    29 Jan 2007 William Renaud Would it be possible to make this compatible with Octave? Due to licensing restrictions it is difficult for many people to have numerous Matlab instances available.
    30 Jan 2007 Michal Kvasnicka I am not able to reproduce any improvement from the sequential to the parallel (multicore) realization of the testing demo code.
    I would go as far as to say that the parallel (multi-core) realization is slower than the sequential one in some cases.
    30 Jan 2007 Michal Kvasnicka Wow!!! After a few hours of hard experimentation with MULTICORE I finally learned how to use this code for parallel running on my four-core PC. This is a really good and useful tool for distributed computing!!!
    1. The informational text in the demo should be extended to a detailed description of how to run "testfun" in parallel.
    2. Some work should be done to minimize the interprocess communication overhead, which may be very intensive (25% of the overall load) in some cases.
    Good work!!!
    02 Mar 2007 schena gianni on dual-processor Intel Xeon based machines it halves calculation time, i.e. it cuts it by a factor of 2!
    21 May 2007 Darren 3M Brilliant use of the filesystem to share the load. It's not quite a linear increase, but on a quick cluster test I got a 4x speedup with 5 CPUs, so 80% efficient.
    14 Jun 2007 Markus Buehren Yes and no: The slave process should create the directory if it is not existing (I have updated that). However, you can start the slave processes whenever you like! You can also interrupt and restart them while the master is running.
    26 Aug 2007 happy matlabuser Ran it with 8 processors across 6 machines, and it works beautifully. Unfortunately, if you kill one or more processors, the master processor MUST do that job. Since the jobs are long for my problem, it's better to kill everything in that situation and finish everything at the command line. The master starts at the top of the list, and the slaves start at the bottom. When they meet somewhere in the middle, the master will always redo one of the jobs. I don't think the code will be efficient for a very large number of very small jobs (correct me if I'm wrong); so, I recommend making jobs medium length, (total run time)/(10*(# of processors)) for example. For my problem, these are fairly minor concerns. The author did a great job with this code -- easily saving me hours of work -- thank you!
    27 Aug 2007 Kevin Thom Awesome program ... I have run it successfully using anywhere from 4 processors on one machine to 10 processors on 5 machines ... it makes a whole range of computationally expensive projects feasible in MATLAB.
    05 Oct 2007 Huy Bui Multicore works quite well overall. I found a bug though: I made a mistake in parameterCell, and the slave processes all died because of it. When I tried to exit all the slave processes and start again, the same error that killed the slaves appeared again and again.
    21 Oct 2007 David Brown Works excellently. It would be nice to see this turned into a fully-guided setup to use this.
    07 Dec 2007 Andrea Soldan very useful and it works excellently.
    much more than the distribuited computing toolbox provided by Matlab (which is very hard to use).
    my SO is Linux, and i'm working with 4 workers ( 2 dual-core processors)
    18 Dec 2007 A. S. Thanks, very useful. Synchronization is not optimal (for example, the master shouldn't start working on a task if a slave is already working on it), but still a great program.
    14 Jan 2008 igor scardanzan great, just some difficulty killing the slave: the process persists and Ctrl-C does not work, so one has to await the end of execution
    04 Feb 2008 Igor S An option for avoiding the use of the master core is desirable.
    Indeed, if one slave process terminates or crashes, the entire process continues; but if by chance the iteration that crashes is on the master core, the whole process is compromised.
    08 Feb 2008 Markus Buehren > An option for avoiding the use of the
    > master core is desirable
    The option is already existing: You can use the input parameter MAXMASTEREVALUATIONS and set it to zero.
    25 May 2008 Jun Kim This process works very well. For my model estimation, I was able to cut the execution time in a drastic manner. Also Markus was kind and was very responsive to my question.
    14 Jul 2008 Robert Turner Brilliant library. Works like a charm
    03 Sep 2008 M H Its absolutely excellent. Am using it now on my quad core machine and am probably going to buy another quad core just to see my models run so quickly. :)
    15 Sep 2008 Bruno Cordani Great!!!
    19 Dec 2008 Janos Rohaly It seems there is the possibility for master and slave to simultaneously evaluate the same set of parameters. For example, if a slave starts evaluating a slow process, the master can catch up, and there is nothing to prevent it from starting the very same computation since the slave hasn't generated the result file yet. There is also a bug in setfilesemaphore.m: dirStruct(k).datenum in line 78 should be datenum(dirStruct(k).date).
    21 Dec 2008 Marcio Sales I tested the functions on two dual-core machines. I had great gains when parallelizing processing between the processors of a dual-core machine or between the two machines using a single processor in each. However, I had no significant gain when trying to use both processors on both machines. Is that because the gain of using more processors is being reduced by the increased load of data recording when you add more processors/machines? Also, there are times when I get an error in which one of the workers can't delete the temporary files, and this seems to happen more frequently when you increase the number of workers. Is anyone having the same issues? My two machines run Vista 64-bit.
    07 Jan 2009 Arturo Serrano I got the same problem as Rohaly: when the master ends its computation and there is a slave still working, the master computes the remaining job again, yielding an extra iteration.
    The solution is to set MAXMASTEREVALUATIONS, with all its drawbacks; as I understand it, this isn't a bug but rather the problem of not knowing whether the slave got interrupted.
    BTW, it works like a charm.
    24 Jan 2009 Vasilis Kapetanidis Thank you! This works just fine and now I have 100% CPU usage on all 4 cores! By measuring the elapsed time it seems that it runs about 3.4 times faster than with a single matlab doing all the work, so that's about 85% efficient on my quad-core
    Now, if only this could run on a single multi-core machine with only one Matlab instance running...
    25 Jan 2009 Jordi Arnabat Thanks for this great contribution, it's very useful.
    Correction:
    I've used it under a grid of computers running different OSes: GNU/Linux, Mac and Windows (XP). When the shared folder is on a network computer (not mapped to a local drive, for example \\servername\sharedfolder), Windows systems fail trying to delete the semaphores, causing the master process to run forever.
    The solution I found is to slightly modify chompsep.m and concatpath.m as follows:
    _______________________________________________________
    function str = chompsep(str)
    unix_sep = '/';
    pc_sep = '\';
    if isunix && str(1)==pc_sep
        str = strrep(str, pc_sep, unix_sep);
    elseif ispc && str(1)==unix_sep
        str = strrep(str, unix_sep, pc_sep);
    end
    if ~isempty(str) && (str(end) == unix_sep || str(end) == pc_sep)
        str(end) = '';
    end
    _______________________________________________________
    function str = concatpath(varargin)
    unix_sep = '/';
    pc_sep = '\';
    str = '';
    for n = 1:nargin
        curStr = varargin{n};
        str = fullfile(str, chompsep(curStr));
    end
    if isunix && str(1)==pc_sep
        str = strrep(str, pc_sep, unix_sep);
    elseif ispc && str(1)==unix_sep
        str = strrep(str, unix_sep, pc_sep);
    end
    05 Feb 2009 Moody I've been using Multicore for a while now and it's absolutely excellent. I'm running on 5 dual Xeon X5460 machines as well as a couple of quad-core boxes.
    I was wondering if anyone has compared the performance of this package with parfor from the parallel computing Matlab toolbox. Are they comparable?
    I believe I'm bottlenecked now by hard disk I/O, so I was looking at the in-memory possibilities of this, or potentially upgrading my HDs to solid state to reduce the overhead.
    BTW, I also tried pre-creating all the mat files once instead of doing multiple loops to reduce the I/O. Unfortunately, that didn't help as much as I hoped.
    Don't get me wrong though, this is much, much, much faster than single threading, but as always we need to keep pushing :).
    10 Feb 2009 Markus Buehren I have opened a discussion group for the Multicore package on Yahoo. Please join and discuss with other users!
    http://groups.yahoo.com/group/multicore_for_matlab/
    18 Feb 2009 Richard Crozier Fantastic program, and particularly suited to my work with genetic algorithms. There is one minor error I've noticed though. In startmulticoreslave, if you activate debug mode you reach the following line (77):
    fileNr = str2double(regexptokens(parameterFileName,...
        'parameters_\d+_(\d+)\.mat'));
    But there is no regexptokens function, at least not in R2007a, R2007b or R2008a. My solution is to replace this with the following lines (although I'm sure someone could come up with something more robust and/or elegant):
    fileNrCell = regexp(parameterFileName, 'parameters_\d+_(\d+)\.mat', 'tokens');
    fileNr = str2double(fileNrCell{1});
    The program is excellent though, thanks again!
    02 Apr 2009 Richard Brilliant tool! Great being able to sit back and watch a progress bar sliding along as a room full of computers gets to work doing your simulation.
    I have a question though - what dimensions are typically used for parameterCell? I've tried doing some large multidimensional runs (e.g. 4x150x10) and things seemed to grind to a halt - I'm still looking into it, just wondered if the dimensions I'm using are typical or too large.
    Cheers. Great work,
    Richard.
    03 Jun 2009 Thomas Great tool!
    23 Jul 2009 Nir Great Work !
    Speeds up my work more than twice on a quad computer. Not much change had to be done in order to use it.
    Thanks
    Nir
    24 Aug 2009 German Hello, when I am running the multicoredemo, only the master is working, but not the slave. What am I doing wrong?
    Multithreading is disabled / set to one core.
    I use a dual-core processor with Windows Vista and Matlab 7.8.0 R2009a 64-bit.
    Thank you very much.
    25 Aug 2009 German sorry, I just forgot to start a second Matlab session with "startmulticoreslave".
    Great code.
    16 Sep 2009 Amir I have a little bit of a problem running the master and slave processes over the network. I am not sure how to share the folder over a Linux network!
    Can anybody help?
    28 Sep 2009 Johannes Wunsch Awesome tool, I use it for fitting a computationally expensive financial model and it works just great. Thanks a lot for publishing it!! Johannes
    24 Oct 2009 DAdler Thank you very much for this great tool! I started using your code a few months ago, and I must say it has saved me many hours of work.
    04 Nov 2009 Karl Hi Markus,
    first of all -- you've developed a great tool that helps me a lot with my simulations in the field of audio signal processing. Seeing my dual quad core at 100% (instead of 13%) load warms my heart :).
    Well, I'm not sure whether I'm getting something wrong here, but I had some problems when the function that is executed by the multicore master (and the slaves) has more than one return value. It seems that in this case all but the first return value get lost. I applied a little trick that I found out about some time ago to solve this problem:
    Every time that feval() is called (in startmulticoremaster() and startmulticoreslave()) I added something like this:
    N_returnValues = nargout(functionHandleCell{k});
    clear('returnValue'); % this is dirty! The next call doesn't work
                          % without clearing "returnValue" beforehand if
                          % N_returnValues==1. If it's greater -> no
                          % problem (even without clearing "returnValue")
    [returnValue{1:N_returnValues}] = feval(functionHandleCell{k}, parameterCell{k}{:}); % ha!
    resultCell{k} = returnValue;
    Doing so, the resultCell that is returned from startmulticoremaster() is always a cell -- even if the called function has only one return value...
    I hope this is of any value to anybody and that I'm not causing trouble by posting this hack :). I'm always open to learn a better solution....
    Cheers, and thanks again for Multicore!
        Karl
    11 Dec 2009 Robert Stead I'm having problems with the lasterror function in this code. There appear to be several instances where the lasterror function is passed a string 'reset' as an input argument, but the function lasterror is only defined for inputs of structure type. This causes errors at several points in the code, and I am unable to run the multicoredemo routine. I'm sure this is something I'm doing wrong, but I'd be grateful if someone could help me!
    26 Sep 2010 Hamid Badi Great
    04 Oct 2010 Torfinn Thank you for these tools, they are vastly useful and will save me much time.
    11 Dec 2010 Johnny Ta awesome. you're my savior. the code works like charm!
    23 Feb 2011 Darin McCoy No improvement running the multicoredemo.m file
    Elapsed time running STARTMULTICOREMASTER: 21.53 seconds.
    Elapsed time running STARTMULTICOREMASTER: 21.67 seconds.
    Elapsed time running STARTMULTICOREMASTER: 21.31 seconds.
    Elapsed time running STARTMULTICOREMASTER: 21.31 seconds.
    Elapsed time running STARTMULTICOREMASTER: 21.30 seconds.
    Elapsed time without slave support: 21.14 seconds.
    Elapsed time running TESTFUN directly: 20.00 seconds.
    25 Feb 2011 Darin McCoy Never mind my previous comment... I didn't read the instructions :)
    5 stars for the m file and 6 stars for customer service. Thanks Markus!
    12 Jun 2011 Xinghua Lou Hi Markus,
    Great work!
    I may have one suggestion: the file I/O becomes a bottleneck in my application, since saving the meta-data of a task (large image sequences) costs almost as much time as processing the task. Maybe it would help a lot if the file I/O could be replaced by shared-memory functions; there is a new Matlab library for this: http://www.mathworks.com/matlabcentral/fileexchange/28572-sharedmatrix. I think it is a perfect complement to your library.
    Best,
    Xinghua
    16 Jun 2011 Giovanni Bracco Simply wonderful!
    I have an optimization problem, including several launches of a Simulink model, that runs on a single core in a little more than 30 hours. By reformulating the problem as required by Markus's scripts (half a day's work) I'm currently using five Windows PCs with a total of 14 cores and carrying out the simulation in 3.1 hours.
    Thank you!!!
    19 Nov 2011 laoya The tool is really powerful. I want to know if we could develop a similar tool in Fortran or C. There are two reasons:
    1) since every master and slave program needs a Matlab session, it is too expensive to buy multiple Matlab licenses to run on different machines;
    2) the slave program is launched by Matlab, which also needs more than 100 MB of memory; however, the slave may only need to launch an external executable with different parameters. Writing a pure master/slave program would decrease the memory usage greatly.
    Thanks,
    Zhanhgong Tang
    28 Nov 2011 Zhanhong Great package! Definitely useful for multicore CPUs. I have a 6-core AMD. It's such a pain if MATLAB can't run in parallel.
    I have a demo tip for newbies:
    If you want to see the effect, make sure your function has to run at least several times on the input cell. If your function only has one input, all the slave sessions will be doing nothing because the function is already running on the master. You'd probably want to split your input into segments to make all the slaves run simultaneously.
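    The segmentation tip above could look like this in practice (a hypothetical sketch; processchunk.m is a made-up function name and would have to be on the path of the master and all slaves):

```matlab
% Hypothetical sketch: split one large input into several segments so
% that the slave sessions get work too, instead of a single call that
% runs entirely on the master.
data = rand(1, 1e6);
nSegments = 8;                 % e.g. roughly 2x the number of cores
edges = round(linspace(0, numel(data), nSegments + 1));
parameterCell = cell(1, nSegments);
for k = 1:nSegments
    parameterCell{k} = data(edges(k)+1 : edges(k+1));
end
resultCell = startmulticoremaster(@processchunk, parameterCell);
```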
    28 Nov 2011 Rohit Verma Can I use it for running multiple processes accessing the same function with different parameter cells? Please note that the function takes an image input in each call. Will that be a problem?
    Thanks, very nice code
    28 Nov 2011 Rohit Verma I tried running my function simultaneously with 3 different datasets, but the slave processes don't do anything and the time is the same as if I were running without it.
    Updates
    26 Jan 2007 bad line breaks in description...
    29 Jan 2007 Informational text added to demo, improvements in file access organization.
    21 May 2007 Updated info due to new Matlab multicore functionality.
    29 May 2007 Another note about multithreading
    30 May 2007 Update of documentation contained in zip file, no changes to source code.
    15 Jun 2007 Slave process will now create the temporary directory if it is not existing.
    22 Jun 2007 There was a subfunction missing (which is only executed after a write error).
    22 Jun 2007 Yet another small update.
    25 Sep 2007 Improved support for small numbers of very long function evaluations.
    12 Oct 2007 A file was missing - sorry.
    14 Nov 2007 Update of contact information in documentation.
    14 Nov 2007 Old e-mail address removed from help comments of m-files.
    07 Dec 2007 A subfunction that is only executed on certain systems was missing.
    03 Nov 2008 Subfunction datenum2 was not needed.
    15 Dec 2008 Semaphore stuff improved.
    17 Dec 2008 Forgot to include file chompsep.m
    21 Dec 2008 Semaphore mechanism improved.
    07 Jan 2009 Introduced parameter EVALSATONCE, which causes the multicore package to do several function evaluations in a row before saving/loading, thus reducing the communication overhead. Demo function MULTICOREDEMO heavily commented.
    18 Jan 2009 I have nearly re-written both master and slave in order to make the package even more robust and to reduce the overhead for inter-process communication.
    27 Jan 2009 Another change to the semaphore mechanism.
    22 Feb 2009 File regexptokens.m added.
    Discussion group created: http://groups.yahoo.com/group/multicore_for_matlab
    09 Mar 2009 If a slave is killed while working on a job, the master will now generate the parameter file of that job again instead of working on the job itself. This will increase performance in certain situations.
    17 Mar 2009 Added an optional waitbar.
    20 Mar 2009 Added estimation of time left in waitbar.
    05 Apr 2009 Using system-dependent file separators in paths again. Waitbar shows progress during parameter file generation now.
    07 Apr 2009 Two bugs fixed, one regarding the waitbar, one regarding the semaphore mechanism.
    10 Apr 2009 In each multicore run, "clear functions" is now called once to ensure that changes to m-files take effect.
    12 Apr 2009 Call to "clear functions" now in master and slaves, bug fixed.
    13 Apr 2009 File displayerrorstruct.m was missing.
    15 Apr 2009 Bug fixed.
    17 Jun 2009 Estimation of time left changed, post-processing function introduced.
    19 Jun 2009 Structure being passed to post-processing function changed (still undocumented feature)
    25 Aug 2009 Small changes to documentation and gethostname.m
    10 Mar 2010 Bugfix.
    11 Apr 2011 Only E-mail changed in html document.
    04 Jul 2011 Description modified to make it more concise.
    04 Jul 2011 Links in help lines corrected.
    05 Jul 2011 Description changed again.
    29 Aug 2011 Overhead resulting from expanding the function handle cell reduced.
    29 Aug 2011 Typo fixed.
    30 Aug 2011 Typo fixed.
    12 Sep 2011 Bugfix: calling startmulticoremaster.m without settings works now.
    18 Sep 2011 New features:
    1. Slave settings can be set via command line argument.
    2. Slave Matlab process can be quit after a given time in seconds.

    Can MATLAB make use of multi-core CPU

    I've got a core 2 duo PC with Windows XP and MATLAB R2006a installed.
    I want to know if MATLAB will take advantage of the two cores of CPU
    to double the computational speed (in the ideal situation) if I do
    not use any parallel programming technique in my function.
    In detail, most of the computing time of my script is spent on a
    2-D convolution which is iterated 5000 times. The data in each
    iteration are independent, so parallel computing techniques can
    help.

    Well, I don't expect MATLAB to be smart enough to optimize my script.
    So my second question is how to do parallel programming in MATLAB.
    Thanks.

    Andy.
    • Comments
    • Tue, 29 Apr 2008 13:47:00 GMT(1)
    • Andy wrote:
      >
      > I've got a core 2 duo PC with Windows XP and MATLAB R2006a
      > installed.
      > I want to know if MATLAB will take advantage of the two cores of
      > CPU
      > to double the computational speed (in the ideal situation) if I do
      > not use any parallel programming technique in my function.
      > In detail, most of the computing time of my script is spent on the
      > 2-D convolution which is iterated 5000 times. The data in each
      > iteration are independent, so parallel computing techniques can
      > help.
      > Well, I don't expect MATLAB to be smart enough to optimize my
      > script.
      > So my second question is how to do parallel programming in MATLAB.
      > Thanks.
      > Andy.


      Hi Andy,

      MATLAB is not a multithreaded application, so it cannot make use of
      the dual core. But there is a toolbox, the Distributed Computing
      Toolbox, that makes use of multi-processor systems. MathWorks is
      also in the process of supporting multicore systems. I hope this
      info may help you a bit.

      Cheers !!

      V
    • Tue, 29 Apr 2008 13:48:00 GMT(2)
    • I am using the processing power of multiple cores, or even of
      multiple computers, without using the distributed computing toolbox.
      What my master process does is to save the data needed for the
      computation task into simple data files. One or several slave
      processes load these files, do the computation and save the results
      into files. Then the master process can load the results.

      Saving and loading is very quick, so the organizational overhead
      should be negligible if your processing task needs several seconds
      or more.

      If you like more advice on how I realized this, let me know.

      Markus
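      Markus's scheme is a MATLAB workflow, but the idea is
      language-independent. Below is a minimal Python sketch of the same
      file-based master/worker pattern; all function names are
      hypothetical, and the "worker" is invoked in-process for brevity
      (in practice it would be a separate process, possibly on another
      machine sharing the directory).

```python
import os
import pickle
import tempfile

# Sketch of the file-based master/worker pattern described above.  All
# function names are hypothetical; Markus's actual toolbox is for MATLAB.
# The master saves each task's input data to a file; workers load a task
# file, compute, and save a result file; the master then collects results.

def master_write_tasks(workdir, tasks):
    """Save one input file per task."""
    for i, data in enumerate(tasks):
        with open(os.path.join(workdir, "task_%d.in" % i), "wb") as f:
            pickle.dump(data, f)

def worker(workdir, compute):
    """Process every pending task file, writing a matching result file.

    In real use this would run as a separate process, possibly on
    another machine sharing the work directory.
    """
    for name in sorted(os.listdir(workdir)):
        if name.endswith(".in"):
            with open(os.path.join(workdir, name), "rb") as f:
                data = pickle.load(f)
            result = compute(data)
            with open(os.path.join(workdir, name[:-3] + ".out"), "wb") as f:
                pickle.dump(result, f)

def master_collect(workdir, n_tasks):
    """Load the result files back in task order."""
    results = []
    for i in range(n_tasks):
        with open(os.path.join(workdir, "task_%d.out" % i), "rb") as f:
            results.append(pickle.load(f))
    return results

with tempfile.TemporaryDirectory() as workdir:
    tasks = [list(range(k)) for k in (3, 5, 7)]     # independent tasks
    master_write_tasks(workdir, tasks)
    worker(workdir, compute=sum)                    # run the "slave" in-process
    print(master_collect(workdir, 3))               # [3, 10, 21]
```

      As Markus notes, this only pays off when each task takes several
      seconds or more, so the save/load overhead is negligible.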
    • Tue, 29 Apr 2008 13:49:00 GMT(3)
    • I'd like to know your solution.

      Markus wrote:
      >
      > I am using the processing power of multiple cores, or even of
      > multiple computers, without using the distributed computing
      > toolbox.
      > What my master process does is to save the data needed for the
      > computation task into simple data files. One or several slave
      > processes load these files, do the computation and save the results
      > into files. Then the master process can load the results.
      > Saving and loading is very quick, so the organizational overhead
      > should be negligible if your processing task needs several seconds
      > or more.
      > If you like more advice on how I realized this, let me know.
      > Markus
    • Tue, 29 Apr 2008 13:50:00 GMT(4)
    • Andy wrote:
      >
      > I've got a core 2 duo PC with Windows XP and MATLAB R2006a
      > installed.

      [snip wants matlab to utilize core 2 duo]

      While not in the way you describe, the pre-release R2007a has some
      multi-thread, multi-processor support. On the computations I tested
      where multi-threading is enabled, I saw a 3x-4x improvement in
      speed. I have a 4-processor (8-core) computer to work with.

      A Friend.
    • Tue, 29 Apr 2008 13:51:00 GMT(5)
      Well, after a brief look at the Distributed Computing Toolbox on the
      MathWorks website, I get the impression that this toolbox is
      designed for computing clusters, not for a multi-core CPU in a
      single machine. So it may not help me.

      Vijay Singh wrote:
      >
      > Andy wrote:
      > [snip]
      > Hi Andy,
      > MATLAB is not a multithreaded application, so it cannot make use
      > of both cores. There is, however, the Distributed Computing
      > Toolbox, which makes use of multi-processor systems, and MathWorks
      > is in the process of supporting multicore systems as well. I hope
      > this info helps a bit.
      > Cheers !!
      > V
    • Tue, 29 Apr 2008 13:52:00 GMT(6)
    • On Wed, 24 Jan 2007 22:21:47 -0500, Andy wrote:

      > I've got a core 2 duo PC with Windows XP and MATLAB R2006a installed.
      > I want to know if MATLAB will take advantage of the two cores of CPU
      > to double the computational speed (in the ideal situation) if I do
      > not use any parallel programming technique in my function.
      > In detail, most of the computing time of my script is spent on the
      > 2-D convolution, which is iterated 5000 times. The data in each
      > iteration are independent, so a parallel computing technique can
      > help.
      > Well, I don't expect MATLAB to be smart enough to optimize my script.
      > So my second question is how to do parallel programming in MATLAB.
      > Thanks.
      > Andy.


      In addition to what others have said, there is more information available
      if you search this newsgroup. This question is quite frequently asked, so
      there are numerous previous posts to look up.

      Dan
    • Tue, 29 Apr 2008 13:53:00 GMT(7)
    • I have put my parallelization code into a small toolbox that you can
      download here:

      <[url]http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=13775[/url]>

      This is only one of several similar solutions, but maybe someone can
      use it for their work. It does not use any C-programmed code that
      has to be compiled before one can start to work.

      Markus

    Semidefinite programming

    From Wikipedia, the free encyclopedia
    Semidefinite programming (SDP) is a subfield of convex optimization concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron.
    Semidefinite programming is a relatively new field of optimization which is of growing interest for several reasons. Many practical problems in operations research and combinatorial optimization can be modeled or approximated as semidefinite programming problems. In automatic control theory, SDPs are used in the context of linear matrix inequalities. SDPs are in fact a special case of cone programming and can be efficiently solved by interior point methods. All linear programs can be expressed as SDPs, and via hierarchies of SDPs the solutions of polynomial optimization problems can be approximated. Finally, semidefinite programming has been used in the optimization of complex systems.


    Motivation and definition

    Initial motivation

    A linear programming problem is one in which we wish to maximize or minimize a linear objective function of real variables over a polyhedron. In semidefinite programming, we instead use real-valued vectors and are allowed to take the dot product of vectors. Specifically, a general semidefinite programming problem can be defined as any mathematical programming problem of the form
    \begin{array}{rl}
{\displaystyle \min_{x^1, \ldots, x^n \in \mathbb{R}^n}} & {\displaystyle \sum_{i,j \in [n]} c_{i,j} (x^i \cdot x^j)} \\
\text{subject to} & {\displaystyle \sum_{i,j \in [n]} a_{i,j,k} (x^i \cdot x^j) \leq b_k \qquad \forall k}. \\
\end{array}

    Equivalent formulations

    An n \times n matrix M is said to be positive semidefinite if it is the Gram matrix of some vectors (i.e., if there exist vectors x^1, \ldots, x^n such that m_{i,j}=x^i \cdot x^j for all i,j). If this is the case, we denote this as M \succeq 0. Note that there are several other equivalent definitions of being positive semidefinite.
    Denote by \mathbb{S}^n the space of all real symmetric matrices. The space is equipped with the inner product (where {\rm tr} denotes the trace)   \langle A,B\rangle_{\mathbb{S}^n} = {\rm tr}(A^T B) = \sum_{i=1,j=1}^n
  A_{ij}B_{ij}.
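    Both definitions are easy to spot-check numerically. The NumPy sketch below (illustrative only) builds the Gram matrix of a few random vectors and confirms it is positive semidefinite, and checks that the trace inner product agrees with the elementwise sum.

```python
import numpy as np

# Numerical check of the two definitions above: the Gram matrix
# M[i, j] = x^i . x^j of any set of vectors is positive semidefinite,
# and <A, B> = tr(A^T B) equals the elementwise sum of A * B.

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))      # five vectors x^1..x^5 in R^3
M = X @ X.T                          # Gram matrix, so M >= 0 (PSD)

eigs = np.linalg.eigvalsh(M)
print(eigs.min() >= -1e-10)          # True: all eigenvalues nonnegative

A = rng.standard_normal((4, 4)); A = A + A.T   # symmetric matrices
B = rng.standard_normal((4, 4)); B = B + B.T
print(np.isclose(np.trace(A.T @ B), np.sum(A * B)))   # True
```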
    We can rewrite the mathematical program given in the previous section equivalently as
    \begin{array}{rl}
{\displaystyle\min_{X \in \mathbb{S}^n}} & \langle C, X \rangle_{\mathbb{S}^n} \\
\text{subject to} & \langle A_k, X \rangle_{\mathbb{S}^n} \leq b_k, \quad k = 1,\ldots,m \\
& X \succeq 0
\end{array}
    where entry i,j in C is given by c_{i,j} from the previous section and A_k is an n \times n matrix having i,jth entry a_{i,j,k} from the previous section.
    Note that if we add slack variables appropriately, this SDP can be converted to one of the form
    \begin{array}{rl}
{\displaystyle\min_{X \in \mathbb{S}^n}} & \langle C, X \rangle_{\mathbb{S}^n} \\
\text{subject to} & \langle A_i, X \rangle_{\mathbb{S}^n} = b_i, \quad i = 1,\ldots,m \\
& X \succeq 0.
\end{array}
    For convenience, an SDP may be specified in a slightly different, but equivalent form. For example, linear expressions involving nonnegative scalar variables may be added to the program specification. This remains an SDP because each variable can be incorporated into the matrix X as a diagonal entry (X_{ii} for some i). To ensure that X_{ii} \geq 0, constraints X_{ij} = 0 can be added for all j \neq i. As another example, note that for any positive semidefinite matrix X, there exists a set of vectors \{ v_i \} such that the i, j entry of X is X_{ij} = (v_i, v_j) the scalar product of v_i and v_j. Therefore, SDPs are often formulated in terms of linear expressions on scalar products of vectors. Given the solution to the SDP in the standard form, the vectors \{ v_i \} can be recovered in O(n^3) time (e.g., by using an incomplete Cholesky decomposition of X).
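    The recovery step mentioned above can be sketched in a few lines of NumPy (an illustration, not a full incomplete-Cholesky routine): a Cholesky-type factorization of X yields vectors whose pairwise inner products reproduce the entries of X. A small diagonal jitter is added because np.linalg.cholesky requires strict positive definiteness; an eigendecomposition would handle exactly singular X as well.

```python
import numpy as np

# Recovering vectors {v_i} with v_i . v_j = X_ij from a PSD matrix X,
# as described above.  np.linalg.cholesky needs strict positive
# definiteness, so a tiny diagonal jitter is added.

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
X = B @ B.T                          # a sample PSD matrix

L = np.linalg.cholesky(X + 1e-12 * np.eye(4))
V = L                                # row i of L is the vector v_i

print(np.allclose(V @ V.T, X))       # True: inner products reproduce X
```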

    Duality theory

    Definitions

    Analogously to linear programming, given a general SDP of the form
    \begin{array}{rl}
{\displaystyle\min_{X \in \mathbb{S}^n}} & \langle C, X \rangle_{\mathbb{S}^n} \\
\text{subject to} & \langle A_i, X \rangle_{\mathbb{S}^n} = b_i, \quad i = 1,\ldots,m \\
& X \succeq 0
\end{array}
    (the primal problem or P-SDP), we define the dual semidefinite program (D-SDP) as
    \begin{array}{rl}
{\displaystyle\max_{y \in \mathbb{R}^m}} & \langle b, y \rangle_{\mathbb{R}^m} \\
\text{subject to} & {\displaystyle\sum_{i=1}^m} y_i A_i \preceq C
\end{array}
    where for any two matrices P and Q, P \succeq Q means P-Q \succeq 0.

    Weak duality

    The weak duality theorem states that the value of the primal SDP is at least the value of the dual SDP. Therefore, any feasible solution to the dual SDP lower-bounds the primal SDP value, and conversely, any feasible solution to the primal SDP upper-bounds the dual SDP value. This is because
    \langle C, X \rangle - \langle b, y \rangle
= \langle C, X \rangle - \sum_{i=1}^m y_i b_i
= \langle C, X \rangle - \sum_{i=1}^m y_i \langle A_i, X \rangle
= \langle C - \sum_{i=1}^m y_i A_i, X \rangle
\geq 0,
    where the last inequality is because both matrices are positive semidefinite.
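    The last inequality rests on the fact that \mathrm{tr}(PQ) \geq 0 whenever P and Q are both positive semidefinite. A quick randomized spot-check (P standing in for C - \sum_{i} y_i A_i and Q for X):

```python
import numpy as np

# Spot-check of the last step of the weak-duality argument:
# tr(P Q) >= 0 whenever P and Q are both positive semidefinite.

rng = np.random.default_rng(2)
for _ in range(100):
    A = rng.standard_normal((5, 5))
    B = rng.standard_normal((5, 5))
    P, Q = A @ A.T, B @ B.T          # random PSD matrices
    assert np.trace(P @ Q) >= -1e-9  # nonnegative up to rounding
print("ok")
```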

    Strong duality

    Under a condition known as Slater's condition, the value of the primal and dual SDPs are equal. This is known as strong duality. Unlike for linear programs, however, not every SDP satisfies strong duality; in general, the value of the dual SDP may lie strictly below the value of the primal.
    (i) Suppose the primal problem (P-SDP) is bounded below and strictly feasible (i.e., there exists X_0\in\mathbb{S}^n, X_0\succ 0 such that \langle
A_i,X_0\rangle_{\mathbb{S}^n} = b_i, i=1,\ldots,m). Then there is an optimal solution y^* to (D-SDP) and
    \langle C,X^*\rangle_{\mathbb{S}^n} = \langle b,y^*\rangle_{\mathbb{R}^m}.
    (ii) Suppose the dual problem (D-SDP) is bounded above and strictly feasible (i.e., \sum_{i=1}^m (y_0)_i A_i \prec C for some y_0\in\mathbb{R}^m). Then there is an optimal solution X^* to (P-SDP) and the equality from (i) holds.

    Examples

    Example 1

    Consider three random variables A, B, and C. By definition, their correlation coefficients \rho_{AB}, \ \rho_{AC}, \rho_{BC} are valid if and only if
    \begin{pmatrix}
  1 & \rho_{AB} & \rho_{AC} \\
  \rho_{AB} & 1 & \rho_{BC} \\
  \rho_{AC} & \rho_{BC} & 1
\end{pmatrix} \succeq 0
    Suppose that we know from some prior knowledge (empirical results of an experiment, for example) that -0.2 \leq \rho_{AB} \leq -0.1 and 0.4 \leq \rho_{BC} \leq 0.5. The problem of determining the smallest and largest values that \rho_{AC} \ can take is given by:
    minimize/maximize x_{13}
    subject to
    -0.2 \leq x_{12} \leq -0.1
    0.4 \leq x_{23} \leq 0.5
    x_{11} = x_{22} = x_{33} = 1 \
    \begin{pmatrix}
  1 & x_{12} & x_{13} \\
  x_{12} & 1 & x_{23} \\
  x_{13} & x_{23} & 1
\end{pmatrix} \succeq 0
    To obtain the answer, we formulate this as an SDP with \rho_{AB} = x_{12}, \ \rho_{AC} = x_{13}, \ \rho_{BC} = x_{23}. We handle the inequality constraints by augmenting the variable matrix and introducing slack variables; for example,
    \mathrm{tr}\left(\left(\begin{array}{cccccc}
0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\end{array}\right)\cdot\left(\begin{array}{cccccc}
1 & x_{12} & x_{13} & 0 & 0 & 0\\
x_{12} & 1 & x_{23} & 0 & 0 & 0\\
x_{13} & x_{23} & 1 & 0 & 0 & 0\\
0 & 0 & 0 & s_{1} & 0 & 0\\
0 & 0 & 0 & 0 & s_{2} & 0\\
0 & 0 & 0 & 0 & 0 & s_{3}\end{array}\right)\right)=x_{12} + s_{1}=-0.1
    Solving this SDP gives the minimum and maximum values of \rho_{AC} = x_{13} \ as -0.978 and  0.872 respectively.
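    The reported bounds can be reproduced without an SDP solver. For a symmetric 3 \times 3 matrix with unit diagonal and off-diagonal entries in [-1, 1], positive semidefiniteness reduces to a nonnegative determinant (the other principal minors are automatically nonnegative), so a brute-force grid search over the feasible box recovers the extreme values of x_{13}. This is a verification sketch, not the SDP method described above.

```python
import numpy as np

# Brute-force reproduction of the bounds -0.978 and 0.872 from Example 1.
# For a symmetric 3x3 matrix with unit diagonal and off-diagonals a, b, c
# in [-1, 1], PSD is equivalent to det = 1 + 2abc - a^2 - b^2 - c^2 >= 0.

a = np.linspace(-0.2, -0.1, 21)[:, None, None]   # rho_AB
b = np.linspace(0.4, 0.5, 21)[None, :, None]     # rho_BC
c = np.linspace(-1.0, 1.0, 4001)[None, None, :]  # rho_AC candidates

feasible = 1 + 2 * a * b * c - a**2 - b**2 - c**2 >= 0
vals = np.broadcast_to(c, feasible.shape)[feasible]

# Extremes land close to -0.978 and 0.872, matching the SDP solution.
print(vals.min(), vals.max())
```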

    Example 2

    Consider the problem
    minimize \frac{(c^T x)^2}{d^Tx}
    subject to Ax +b\geq 0
    where we assume that d^Tx>0 whenever Ax+b\geq 0.
    Introducing an auxiliary variable t the problem can be reformulated:
    minimize t
    subject to Ax+b\geq 0, \, \frac{(c^T x)^2}{d^Tx}\leq t
    In this formulation, the objective is a linear function of the variables x,t.
    The first restriction can be written as
    \textbf{diag}(Ax+b)\geq 0
    where the matrix \textbf{diag}(Ax+b) is the square matrix with values in the diagonal equal to the elements of the vector Ax+b.
    The second restriction can be written as
    td^Tx-(c^Tx)^2\geq 0
    or equivalently
    \det\underbrace{\left[\begin{array}{cc}t&c^Tx\\c^Tx&d^Tx\end{array}\right]}_{D}\geq 0
    Thus D \succeq 0.
    The semidefinite program associated with this problem is
    minimize t
    subject to \left[\begin{array}{ccc}\textbf{diag}(Ax+b)&0&0\\0&t&c^Tx\\0&c^Tx&d^Tx\end{array}\right] \succeq 0

    Example 3 (Goemans-Williamson MAX CUT approximation algorithm)

    Semidefinite programs are important tools for developing approximation algorithms for NP-hard maximization problems. The first approximation algorithm based on an SDP is due to Goemans and Williamson (JACM, 1995). They studied the MAX CUT problem: Given a graph G = (V, E), output a partition of the vertices V so as to maximize the number of edges crossing from one side to the other. This problem can be expressed as an integer quadratic program:
    Maximize \sum_{(i,j) \in E} \frac{1-v_{i} v_{j}}{2}, such that each v_i\in\{1,-1\}.
    Unless P = NP, we cannot solve this maximization problem efficiently. However, Goemans and Williamson observed a general three-step procedure for attacking this sort of problem:
    1. Relax the integer quadratic program into an SDP.
    2. Solve the SDP (to within an arbitrarily small additive error \epsilon).
    3. Round the SDP solution to obtain an approximate solution to the original integer quadratic program.
    For MAX CUT, the most natural relaxation is
    \max \sum_{(i,j) \in E} \frac{1-\langle v_{i}, v_{j}\rangle}{2}, such that \lVert v_i\rVert^2 = 1, where the maximization is over vectors \{v_i\} instead of integer scalars.
    This is an SDP because the objective function and constraints are all linear functions of vector inner products. Solving the SDP gives a set of unit vectors in \mathbf{R^n}; since the vectors are not required to be collinear, the value of this relaxed program can only be higher than the value of the original quadratic integer program. Finally, a rounding procedure is needed to obtain a partition. Goemans and Williamson simply choose a uniformly random hyperplane through the origin and divide the vertices according to which side of the hyperplane the corresponding vectors lie on. Straightforward analysis shows that this procedure achieves an expected approximation ratio (performance guarantee) of 0.87856 - ε. (The expected value of the cut is the sum over edges of the probability that the edge is cut; an edge (i,j) is cut with probability \cos^{-1}\langle v_{i}, v_{j}\rangle / \pi. Comparing this probability to (1-\langle v_{i}, v_{j}\rangle)/2, the ratio of the two is always at least 0.87856.) Assuming the Unique Games Conjecture, it can be shown that this approximation ratio is essentially optimal.
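    The rounding step can be sketched on the triangle K_3, whose optimal SDP vectors are known to be three unit vectors at 120-degree angles (SDP value 3 \cdot (1 - \cos 120°)/2 = 2.25). The snippet below takes those vectors as given instead of solving the SDP and applies the random-hyperplane rounding:

```python
import numpy as np

# Hyperplane-rounding step of Goemans-Williamson on the triangle K3.
# The SDP-optimal vectors (taken as given here) are three unit vectors
# at pairwise 120-degree angles.

V = np.array([[1.0, 0.0],
              [-0.5, np.sqrt(3) / 2],
              [-0.5, -np.sqrt(3) / 2]])   # unit vectors, 120 deg apart
edges = [(0, 1), (0, 2), (1, 2)]

rng = np.random.default_rng(3)
r = rng.standard_normal(2)                # random hyperplane normal
side = np.sign(V @ r)                     # side of the hyperplane per vertex

cut = sum(1 for i, j in edges if side[i] != side[j])
print(cut)                                # 2: a generic hyperplane isolates one vertex
```

    Each edge here is cut with probability 120/180 = 2/3, so the expected cut is 2 against the SDP value 2.25, a ratio of about 0.889, consistent with the 0.87856 guarantee.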
    Since the original paper of Goemans and Williamson, SDPs have been applied to develop numerous approximation algorithms. Recently, Prasad Raghavendra has developed a general framework for constraint satisfaction problems based on the Unique Games Conjecture.[1]

    Algorithms

    There are several types of algorithms for solving SDPs. These algorithms output the value of the SDP up to an additive error \epsilon in time that is polynomial in the program description size and \log (1/\epsilon).

    Interior point methods

    Most codes are based on interior point methods (CSDP, SeDuMi, SDPT3, DSDP, SDPA). These methods are robust and efficient for general linear SDP problems, but they are limited by the fact that they are second-order methods and need to store and factorize a large (and often dense) matrix.

    Bundle method

    The code ConicBundle formulates the SDP problem as a nonsmooth optimization problem and solves it by the Spectral Bundle method of nonsmooth optimization. This approach is very efficient for a special class of linear SDP problems.

    Other

    Algorithms based on the augmented Lagrangian method (PENSDP) are similar in behavior to the interior point methods and can be specialized to some very large scale problems. Other algorithms use low-rank information and reformulation of the SDP as a nonlinear programming problem (SDPLR).

    Software

    The following codes are available for SDP:
    SDPA, CSDP, SDPT3, SeDuMi, DSDP, PENSDP, SDPLR, ConicBundle
    SeDuMi runs on MATLAB and uses the Self-Dual method for solving general convex optimization problems.

    Applications

    Semidefinite programming has been applied to find approximate solutions to combinatorial optimization problems, such as the solution of the max cut problem with an approximation ratio of 0.87856. SDPs are also used in geometry to determine tensegrity graphs, and arise in control theory as LMIs.