How to handle optimization problems when the optimization variable is a matrix?
Suppose we have the following optimization problem

$$
\min_{0 \preceq M \preceq I} y^T M y
$$

where $y \in \mathbb{R}^n$ and $M \in \mathbb{R}^{n \times n}$ is a positive semidefinite matrix. Notice that the optimization variable is a matrix.

Is there any algebraic way to handle this in terms of $M$?




When we write it in standard form we have

$$
\min_M \; y^T M y
$$

$$
\text{s.t.}\quad g_1(M) = -M \preceq 0
$$

$$
\phantom{\text{s.t.}\quad} g_2(M) = M - I \preceq 0
$$

which are matrix inequalities. If the variable were a vector this would be doable, but what do we do when the constraints are in matrix form?
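For intuition, the constraint $0 \preceq M \preceq I$ says exactly that every eigenvalue of the symmetric matrix $M$ lies in $[0,1]$, so these matrix inequalities are the natural generalization of box constraints on a vector. A minimal pure-Python sketch for the $2 \times 2$ symmetric case (the closed-form eigenvalue formula and the sample matrices are illustrative assumptions, not part of the original question):

```python
import math

def eigvals_sym2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    # Radius of the eigenvalue pair around the mean (discriminant term).
    r = math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
    return mean - r, mean + r

def satisfies_box(a, b, c, tol=1e-12):
    """Check 0 <= M <= I in the semidefinite order, i.e. both eigenvalues in [0, 1]."""
    lo, hi = eigvals_sym2(a, b, c)
    return lo >= -tol and hi <= 1.0 + tol

# M = [[0.5, 0.2], [0.2, 0.5]] has eigenvalues 0.3 and 0.7 -> feasible.
print(satisfies_box(0.5, 0.2, 0.5))   # True
# M = [[1.2, 0.0], [0.0, 0.5]] has an eigenvalue 1.2 > 1 -> infeasible.
print(satisfies_box(1.2, 0.0, 0.5))   # False
```

In higher dimensions one would compute the full spectrum, but the feasibility test is the same: all eigenvalues in $[0,1]$.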



I want to write the first-order optimality conditions using the Lagrangian in terms of the gradients of $g_1(M)$ and $g_2(M)$, but these gradients with respect to $M$ are just constant multiples of the identity. The following answer explains it from a different point of view.



What is the KKT condition for constraint $M \preceq I$?



I want a method that ties these two views together.



I want to know the general case and to handle it directly in terms of $M$.



I would appreciate any reference that addresses this issue.
• Doesn't $M=0$ solve the problem? – A.Γ. Dec 21 '18 at 17:29
optimization convex-optimization karush-kuhn-tucker
asked Dec 21 '18 at 17:27 by Saeed
1 Answer
This topic is discussed in Convex Optimization by Boyd and Vandenberghe. See section 5.9.



The key idea here is that you need an appropriate inner product associated with the conic inequality. For positive semidefiniteness constraints, the associated inner product is $\langle A, B \rangle = \operatorname{tr}(A^{T}B)$. The Lagrange multiplier for the conic constraint must be a positive semidefinite matrix $\Lambda \succeq 0$ rather than a scalar. All of the theory for scalar constraints carries over in a straightforward way to this more general setting.
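Spelled out for the problem above, with multipliers $\Lambda_1 \succeq 0$ for $M \succeq 0$ and $\Lambda_2 \succeq 0$ for $M \preceq I$, the Lagrangian built from the trace inner product is (a sketch of the standard conic KKT conditions; the notation here is mine, not Boyd and Vandenberghe's):

$$
\mathcal{L}(M, \Lambda_1, \Lambda_2) = y^T M y - \operatorname{tr}(\Lambda_1 M) + \operatorname{tr}\bigl(\Lambda_2 (M - I)\bigr).
$$

Since $y^T M y = \operatorname{tr}(M \, yy^T)$, the stationarity condition $\nabla_M \mathcal{L} = 0$ reads

$$
yy^T - \Lambda_1 + \Lambda_2 = 0,
$$

together with $\Lambda_1, \Lambda_2 \succeq 0$ and complementary slackness $\operatorname{tr}(\Lambda_1 M) = 0$, $\operatorname{tr}\bigl(\Lambda_2 (I - M)\bigr) = 0$. At $M^\star = 0$ these are satisfied by $\Lambda_1 = yy^T \succeq 0$ and $\Lambda_2 = 0$, which recovers the observation in the comments that $M = 0$ is optimal.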






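The matrix KKT conditions can also be checked numerically. The sketch below (pure Python; the vector $y$ is a made-up example) verifies stationarity and complementary slackness at the candidate optimum $M^\star = 0$ with matrix multipliers $\Lambda_1 = yy^T$ and $\Lambda_2 = 0$:

```python
# Verify the matrix KKT conditions at M* = 0 for min y^T M y, 0 <= M <= I:
#   stationarity:   yy^T - Lambda1 + Lambda2 = 0   (since grad_M y^T M y = yy^T)
#   multipliers:    Lambda1, Lambda2 PSD
#   compl. slack.:  tr(Lambda1 M*) = 0, tr(Lambda2 (I - M*)) = 0

y = [2.0, -1.0]
n = len(y)

# Gradient of the objective with respect to M is the rank-one matrix y y^T.
yyT = [[y[i] * y[j] for j in range(n)] for i in range(n)]

M_star = [[0.0] * n for _ in range(n)]     # candidate optimum
Lambda1 = yyT                              # multiplier for M >= 0
Lambda2 = [[0.0] * n for _ in range(n)]    # multiplier for M <= I (inactive)

# Stationarity: yy^T - Lambda1 + Lambda2 == 0 entrywise.
stationarity = all(
    abs(yyT[i][j] - Lambda1[i][j] + Lambda2[i][j]) < 1e-12
    for i in range(n) for j in range(n)
)

# Lambda1 = y y^T is PSD: x^T (y y^T) x = (y . x)^2 >= 0 for any x.
# Complementary slackness: tr(Lambda1 M*) = 0 since M* = 0.
trace_slack = sum(Lambda1[i][j] * M_star[j][i] for i in range(n) for j in range(n))

print(stationarity, trace_slack)   # True 0.0
```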
answered Dec 21 '18 at 18:12 by Brian Borchers