Proving $(\mathbf{x}\times\mathbf{y}\cdot\mathbf{N})\,\mathbf{z}+(\mathbf{y}\times\mathbf{z}\cdot\mathbf{N})\,\mathbf{x}+(\mathbf{z}\times\mathbf{x}\cdot\mathbf{N})\,\mathbf{y}=\mathbf{0}$ when $\mathbf{x},\mathbf{y},\mathbf{z}$ are coplanar and $\mathbf{N}$ is a unit normal vector

Prove that if $\mathbf{x},\mathbf{y},\mathbf{z} \in \mathbb{R}^3$ are coplanar vectors and $\mathbf{N}$ is a unit normal vector to the plane, then $$(\mathbf{x}\times\mathbf{y} \cdot \mathbf{N})\,\mathbf{z} + (\mathbf{y}\times\mathbf{z} \cdot \mathbf{N})\,\mathbf{x} + (\mathbf{z}\times\mathbf{x} \cdot \mathbf{N})\,\mathbf{y}=\mathbf{0}.$$




This is an elementary identity involving cross products which is used in the proof of the Gauss-Bonnet Theorem and whose proof was left as an exercise. I've tried it unsuccessfully. Initially I tried writing $\mathbf{N}=\frac{\mathbf{x}\times\mathbf{y}}{\|\mathbf{x}\times\mathbf{y}\|}=\frac{\mathbf{y}\times\mathbf{z}}{\|\mathbf{y}\times\mathbf{z}\|}=\frac{\mathbf{z}\times\mathbf{x}}{\|\mathbf{z}\times\mathbf{x}\|}$ and substituting into the equation to get $\|\mathbf{x}\times\mathbf{y}\|\,\mathbf{z} +\|\mathbf{y}\times\mathbf{z}\|\,\mathbf{x}+\|\mathbf{z}\times\mathbf{x}\|\,\mathbf{y}=\mathbf{0}$, but then I realised these terms are only correct up to $\pm$ signs. You could write the norms in terms of sines of angles and divide by norms to get unit vectors with coefficients $\sin\theta,\sin\psi,\sin(\theta+\psi)$ (or $2\pi -(\theta+\psi)$ I suppose), but I don't know what to do from there, especially when the terms are only correct up to sign. Any hints how to prove this identity? Perhaps there is a clever trick to it but I can't see it.

Edit: Maybe writing $\mathbf{z}=\lambda\mathbf{x}+\mu\mathbf{y}$ will help.










Comments:

  • What does $x \times y \cdot N$ mean? The dot product $(x \times y) \cdot N$? – Widawensen, Mar 20 at 12:51

  • @Widawensen Yes, what else could it mean? – Marc van Leeuwen, Mar 20 at 12:53

  • @MarcvanLeeuwen: that could mean a badly written problem. That happens here sometimes. – Taladris, Mar 21 at 2:54

Tags: linear-algebra, vectors, cross-product






asked Mar 20 at 11:16 by AlephNull; edited Mar 21 at 9:56 by Asaf Karagila






7 Answers

Answer by John Hughes (answered Mar 20 at 11:45), 10 votes:

Here's an observation: If $Q$ is a rotation matrix, then
$$
(Qx) \times (Qy) = Q(x \times y).
$$

You have to prove that, of course, but it's not too tough. Similarly,
$$
(Qx) \cdot (Qy) = x \cdot y
$$
and, for a scalar $\alpha$, we have
$$
Q(\alpha x) = \alpha (Qx).
$$

Now suppose that for some vector $v$, we have
$$
(\mathbf{x}\times\mathbf{y} \cdot \mathbf{N})\,\mathbf{z} + (\mathbf{y}\times\mathbf{z} \cdot \mathbf{N})\,\mathbf{x} + (\mathbf{z}\times\mathbf{x} \cdot \mathbf{N})\,\mathbf{y}=\mathbf{v}.
$$

Key idea 1: You can apply the rules above to show that for any rotation matrix $Q$, you can apply $Q$ to all the elements on the left to get $Qv$.

Key idea 2: You can choose $Q$ so that it takes $N$ to the vector $(0,0,1)$, and puts $x, y,$ and $z$ into the plane consisting of vectors of the form $(a, b, 0)$. And in that plane, it's easy to see that you get $0$, so $Qv = 0$. Hence $v = 0$, and you're done.

In short: by a change of basis, you can assume that $N$ is the vector $(0,0,1)$ and that the other vectors all lie in the $(a, b, 0)$ plane, and things get easy.
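
A quick numerical sanity check of this argument (my own sketch, not part of the original answer; the helper names are mine): build a rotation $Q$ sending $N$ to $(0,0,1)$, verify that applying $Q$ to every ingredient commutes with forming the left-hand side, and verify that the sum vanishes for coplanar vectors.

```python
import numpy as np

def lhs(x, y, z, N):
    """Left-hand side of the identity in the question."""
    return (np.dot(np.cross(x, y), N) * z
            + np.dot(np.cross(y, z), N) * x
            + np.dot(np.cross(z, x), N) * y)

def rotation_taking(a, b):
    """Rotation sending unit vector a to unit vector b (Rodrigues' formula; assumes a != -b)."""
    v = np.cross(a, b)
    c = np.dot(a, b)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

rng = np.random.default_rng(0)
u, w = rng.standard_normal(3), rng.standard_normal(3)        # span a random plane
N = np.cross(u, w)
N /= np.linalg.norm(N)                                        # unit normal to that plane
x, y, z = (cs[0] * u + cs[1] * w for cs in rng.standard_normal((3, 2)))  # coplanar vectors

Q = rotation_taking(N, np.array([0.0, 0.0, 1.0]))
print(np.allclose(Q @ N, [0.0, 0.0, 1.0]))                    # Q takes N to (0, 0, 1)
print(np.allclose(Q @ lhs(x, y, z, N), lhs(Q @ x, Q @ y, Q @ z, Q @ N)))  # rotation compatibility
print(np.allclose(lhs(x, y, z, N), 0.0))                      # the identity itself
```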






Comments:

  • Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(\alpha x)=\alpha (Qx)$. – AlephNull, Mar 20 at 11:55

  • Oh I see, you're talking about the elements, not the terms. I understand the solution now. – AlephNull, Mar 20 at 13:42

  • By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...") – John Hughes, Mar 20 at 16:51

Answer by Song (answered Mar 20 at 16:15, edited Mar 20 at 16:22), 11 votes:

If what is required is only to prove the validity of the given identity, there is another approach. Observe that if $x$ and $y$ are linearly dependent, i.e. for some $c$, $x=cy$ or $y=cx$, then the identity holds trivially because $w\times v =-(v\times w)$ and $v \times v=0$ for all $v,w$. Thus, we may assume $x$ and $y$ are linearly independent, and hence $z$ is a linear combination of $x$ and $y$, that is, $z=ax+by$ for some $a,b$. Now, since the given identity is linear in each variable and it holds for both $z=x$ and $z=y$, it is also true for $z=ax+by$. This proves the identity. It can also be noted that $N$ being perpendicular to the plane containing $x,y,z$ plays no role in this proof.
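
A small check of that last remark (my own addition, not part of the answer): the sum vanishes for coplanar $x, y, z$ even when $N$ is a completely arbitrary vector.

```python
import numpy as np

rng = np.random.default_rng(1)
u, w = rng.standard_normal(3), rng.standard_normal(3)
(a, b), (c, d), (e, f) = rng.standard_normal((3, 2))
x, y, z = a * u + b * w, c * u + d * w, e * u + f * w   # coplanar, but otherwise arbitrary
N = rng.standard_normal(3)                              # deliberately NOT a unit normal

total = (np.dot(np.cross(x, y), N) * z
         + np.dot(np.cross(y, z), N) * x
         + np.dot(np.cross(z, x), N) * y)
print(np.allclose(total, 0))   # True: no property of N is used
```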






Comments:

  • very nice solution! – John Hughes, Mar 20 at 16:54

  • Indeed, this is very elegant. So my last remark had some significance! – AlephNull, Mar 20 at 17:06

  • Thank you both :-) – Song, Mar 21 at 7:07

Answer by J.G. (answered Mar 20 at 12:45), 5 votes:

Writing $x=a\hat{i}+b\hat{j},\,y=c\hat{i}+d\hat{j},\,z=e\hat{i}+f\hat{j},\,N=N\hat{k}$ reduces the sum to $$N\bigl((ad-bc)(e\hat{i}+f\hat{j})+(cf-de)(a\hat{i}+b\hat{j})+(be-af)(c\hat{i}+d\hat{j})\bigr).$$ The $\hat{i}$ coefficient is $N(ade-bce+acf-ade+bce-acf)=0$. The $\hat{j}$ coefficient can be handled similarly.
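
For readers who want to see the cancellation carried out mechanically, here is a symbolic version of the same computation (my own sketch, using SymPy; nothing here is from the answer itself):

```python
import sympy as sp

a, b, c, d, e, f, n = sp.symbols('a b c d e f n', real=True)

# Coplanar vectors in the i-j plane and N = n*k, exactly as in the coordinate choice above.
x = sp.Matrix([a, b, 0])
y = sp.Matrix([c, d, 0])
z = sp.Matrix([e, f, 0])
N = sp.Matrix([0, 0, n])

total = (x.cross(y).dot(N) * z
         + y.cross(z).dot(N) * x
         + z.cross(x).dot(N) * y)
print(sp.simplify(total))   # Matrix([[0], [0], [0]])
```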






Comments:

  • I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer. – AlephNull, Mar 20 at 13:45

Answer by alephzero (answered Mar 20 at 19:43), 4 votes:

Since $\mathbf{x}, \mathbf{y}, \mathbf{z}$ are coplanar, they are linearly dependent. Since the result to be proved is unchanged under cyclic permutation of $\mathbf{x}, \mathbf{y}, \mathbf{z}$, without loss of generality we can write $\mathbf{z} = \lambda \mathbf{x} + \mu \mathbf{y}$ for some scalars $\lambda, \mu$.

Now,
$$\begin{align}
& (\mathbf{y} \times \mathbf{z} \cdot \mathbf{N})\; \mathbf{x} \\
= & (\mathbf{y} \times (\lambda \mathbf{x} + \mu \mathbf{y}) \cdot \mathbf{N})\; \mathbf{x} \\
= & (\mathbf{y} \times \lambda \mathbf{x} \cdot \mathbf{N})\; \mathbf{x} \\
= & (\mathbf{y} \times \mathbf{x} \cdot \mathbf{N})\, (\lambda \mathbf{x})
\end{align}$$
and similarly
$$\begin{align}
& (\mathbf{z} \times \mathbf{x} \cdot \mathbf{N})\; \mathbf{y} \\
= & (\mathbf{y} \times \mathbf{x} \cdot \mathbf{N})\, (\mu \mathbf{y}).
\end{align}$$
So
$$\begin{align}
& (\mathbf{y} \times \mathbf{z} \cdot \mathbf{N})\; \mathbf{x} +
(\mathbf{z} \times \mathbf{x} \cdot \mathbf{N})\; \mathbf{y} \\
= & (\mathbf{y} \times \mathbf{x} \cdot \mathbf{N})\, (\lambda \mathbf{x} + \mu \mathbf{y})\\
= & -(\mathbf{x} \times \mathbf{y} \cdot \mathbf{N})\;\mathbf{z}
\end{align}$$
and the result follows.
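
A symbolic spot-check of the key step (my own sketch, not part of the answer): with $z=\lambda x+\mu y$ and completely general $x, y, N$, the last two terms of the sum collapse to $-(x\times y\cdot N)\,z$.

```python
import sympy as sp

lam, mu = sp.symbols('lambda mu', real=True)
x = sp.Matrix(sp.symbols('x1 x2 x3', real=True))
y = sp.Matrix(sp.symbols('y1 y2 y3', real=True))
N = sp.Matrix(sp.symbols('n1 n2 n3', real=True))   # arbitrary: no unit or normality assumption
z = lam * x + mu * y

last_two = y.cross(z).dot(N) * x + z.cross(x).dot(N) * y
target = -x.cross(y).dot(N) * z
print(sp.simplify(last_two - target))   # Matrix([[0], [0], [0]])
```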







Answer by Travis (answered Mar 21 at 5:02), 2 votes:

If you know a little about the exterior algebra we can see this almost immediately, and in a way that generalizes substantially.

Pick any plane $\Pi$ containing $\mathbf{x}, \mathbf{y}, \mathbf{z}$. The map on $\Pi$ defined by $$(\mathbf{a}, \mathbf{b}, \mathbf{c}) \mapsto [(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{N}]\, \mathbf{c} + [(\mathbf{b} \times \mathbf{c}) \cdot \mathbf{N}]\, \mathbf{a} + [(\mathbf{c} \times \mathbf{a}) \cdot \mathbf{N}]\, \mathbf{b}$$
is visibly trilinear and totally skew in its arguments, so it is a (vector-valued) $3$-form on a $2$-dimensional vector space and hence is the zero map.

NB this argument doesn't use any properties of $\mathbf{N}$.
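
To make "totally skew" concrete (my own check, not in the answer): swapping any two arguments flips the sign of this map, which can be confirmed symbolically.

```python
import sympy as sp

N = sp.Matrix(sp.symbols('n1 n2 n3', real=True))
a = sp.Matrix(sp.symbols('a1 a2 a3', real=True))
b = sp.Matrix(sp.symbols('b1 b2 b3', real=True))
c = sp.Matrix(sp.symbols('c1 c2 c3', real=True))

def T(a, b, c):
    """The trilinear map from the answer."""
    return a.cross(b).dot(N) * c + b.cross(c).dot(N) * a + c.cross(a).dot(N) * b

print(sp.simplify(T(a, b, c) + T(b, a, c)))   # zero matrix: antisymmetric in (a, b)
```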







Answer by Widawensen (answered Mar 20 at 13:42, edited Mar 21 at 9:01), 1 vote:

By the properties of the triple product (circular shift) we can rearrange the formula:

$$((\mathbf{x}\times\mathbf{y}) \cdot \mathbf{N})\, \mathbf{z} + ((\mathbf{y}\times\mathbf{z}) \cdot \mathbf{N})\, \mathbf{x} + ((\mathbf{z}\times\mathbf{x}) \cdot \mathbf{N})\, \mathbf{y} \\ =((\mathbf{N}\times\mathbf{x}) \cdot \mathbf{y})\, \mathbf{z} + ((\mathbf{N}\times\mathbf{y}) \cdot \mathbf{z})\, \mathbf{x} + ((\mathbf{N}\times\mathbf{z}) \cdot \mathbf{x})\, \mathbf{y}.$$

All the cross product vectors $$\mathbf{v}_1=\mathbf{N}\times\mathbf{x},\quad \mathbf{v}_2=\mathbf{N}\times\mathbf{y},\quad \mathbf{v}_3=\mathbf{N}\times\mathbf{z}$$
lie in the plane of the coplanar vectors $\mathbf{x},\mathbf{y},\mathbf{z}$, and they are the vectors $\mathbf{x},\mathbf{y},\mathbf{z}$ rotated by $\pi/2$ in this plane.

So we can limit ourselves to this plane and take any vectors with components $\mathbf{x}=[x_1 \; x_2]^T$, $\mathbf{y}=[y_1 \; y_2]^T$, $\mathbf{z}=[z_1 \; z_2]^T$.

Transform them with the rotation matrix $R=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$, calculate the appropriate dot products, and finally check the formula with these assumed general components.

Namely we need to calculate:
$$(y^T R x)\,z+(z^T R y)\,x+(x^T R z)\,y.$$
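
The final check the answer leaves to the reader can be done symbolically; a minimal sketch (my own, not from the answer):

```python
import sympy as sp

x1, x2, y1, y2, z1, z2 = sp.symbols('x1 x2 y1 y2 z1 z2', real=True)
x = sp.Matrix([x1, x2])
y = sp.Matrix([y1, y2])
z = sp.Matrix([z1, z2])
R = sp.Matrix([[0, -1], [1, 0]])   # rotation by pi/2 within the plane

total = (y.T * R * x)[0] * z + (z.T * R * y)[0] * x + (x.T * R * z)[0] * y
print(sp.simplify(total))          # Matrix([[0], [0]])
```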







Answer (0 votes):

Another approach to the problem uses the formula for the triple product:

$$\mathbf{a}\cdot(\mathbf{b}\times \mathbf{c}) = \det \begin{bmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3
\end{bmatrix}.$$

Then consider the determinant

$$\begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_1 & x_1 & y_1 & z_1 \end{vmatrix}$$

whose columns consist of the components of the vectors $\mathbf{N}, \mathbf{x}, \mathbf{y}, \mathbf{z}$ (the fourth row repeats the first one). Of course such a determinant equals $0$.

Expanding the determinant along the fourth row we obtain:

$$-n_1\begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix} +x_1\begin{vmatrix} n_1 & y_1 & z_1 \\ n_2 & y_2 & z_2 \\ n_3 & y_3 & z_3 \end{vmatrix} -y_1\begin{vmatrix} n_1 & x_1 & z_1 \\ n_2 & x_2 & z_2 \\ n_3 & x_3 & z_3 \end{vmatrix} +z_1\begin{vmatrix} n_1 & x_1 & y_1 \\ n_2 & x_2 & y_2 \\ n_3 & x_3 & y_3 \end{vmatrix}=0,$$

from which the formula for the first component of the vector given in the question follows (the first summand is equal to $0$ because the vectors $\mathbf{x},\mathbf{y},\mathbf{z}$ are coplanar, hence linearly dependent; the columns can be permuted, as required for the third summand, to give the appropriate sign in the expression).

Similarly the determinants

$$\begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_2 & x_2 & y_2 & z_2 \end{vmatrix} \quad\text{and}\quad \begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_3 & x_3 & y_3 & z_3 \end{vmatrix}$$

give the second and the third component of the question's vector, each equal to $0$.
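
A compact way to package this row expansion (my own sketch, not part of the answer): without using coplanarity at all, the expansion can be read as saying that the question's vector equals $\det[\mathbf{x}\;\mathbf{y}\;\mathbf{z}]\,\mathbf{N}$ componentwise, and coplanarity is what makes that determinant vanish. SymPy confirms this:

```python
import sympy as sp

n = sp.Matrix(sp.symbols('n1 n2 n3', real=True))
x = sp.Matrix(sp.symbols('x1 x2 x3', real=True))
y = sp.Matrix(sp.symbols('y1 y2 y3', real=True))
z = sp.Matrix(sp.symbols('z1 z2 z3', real=True))

lhs = x.cross(y).dot(n) * z + y.cross(z).dot(n) * x + z.cross(x).dot(n) * y
d = sp.Matrix.hstack(x, y, z).det()      # determinant with columns x, y, z

print(sp.simplify(lhs - d * n))          # Matrix([[0], [0], [0]])
```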







          7 Answers
          7






          active

          oldest

          votes








          7 Answers
          7






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          10












          $begingroup$

          Here's an observation: If $Q$ is a rotation matrix, then
          $$
          (Qx) times (Qy) = Q(x times y)
          $$



          You have to prove that, of course, but it's not too tough. Similarly,
          $$
          (Qx) cdot (Qy) = x cdot y
          $$

          and, for a scalar $alpha$, we have
          $$
          Q (alpha x) = alpha (Q x)
          $$



          Now suppose that for some vector $v$, we have
          $$
          (mathbfxtimesmathbfy cdot mathbfN) mathbfz + (mathbfytimesmathbfz cdot mathbfN) mathbfx + (mathbfztimesmathbfx cdot mathbfN) mathbfy=mathbfv.
          $$



          Key idea 1: You can apply the rules above to show that for any rotation matrix $Q$, you can apply $Q$ to all the elements on the left to get $Qv$.



          Key idea 2: You can choose $Q$ so that it takes $N$ to the vector $(0,0,1)$, and puts $x, y,$ and $z$ into the plane consisting of vectors of the form $(a, b, 0)$. And in that plane, it's easy to see that you get $0$, so $Qv = 0$. Hence $v = 0$, and you're done.



          In short: by a change of basis, you can assume that $N$ is the vector $(0,0,1)$ and that the other vectors all lie in the $(a, b, 0)$ plane, and things get easy.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
            $endgroup$
            – AlephNull
            Mar 20 at 11:55











          • $begingroup$
            Oh I see, you're talking about the elements, not the terms. I understand the solution now.
            $endgroup$
            – AlephNull
            Mar 20 at 13:42







          • 2




            $begingroup$
            By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
            $endgroup$
            – John Hughes
            Mar 20 at 16:51















          10












          $begingroup$

          Here's an observation: If $Q$ is a rotation matrix, then
          $$
          (Qx) times (Qy) = Q(x times y)
          $$



          You have to prove that, of course, but it's not too tough. Similarly,
          $$
          (Qx) cdot (Qy) = x cdot y
          $$

          and, for a scalar $alpha$, we have
          $$
          Q (alpha x) = alpha (Q x)
          $$



          Now suppose that for some vector $v$, we have
          $$
          (mathbfxtimesmathbfy cdot mathbfN) mathbfz + (mathbfytimesmathbfz cdot mathbfN) mathbfx + (mathbfztimesmathbfx cdot mathbfN) mathbfy=mathbfv.
          $$



          Key idea 1: You can apply the rules above to show that for any rotation matrix $Q$, you can apply $Q$ to all the elements on the left to get $Qv$.



          Key idea 2: You can choose $Q$ so that it takes $N$ to the vector $(0,0,1)$, and puts $x, y,$ and $z$ into the plane consisting of vectors of the form $(a, b, 0)$. And in that plane, it's easy to see that you get $0$, so $Qv = 0$. Hence $v = 0$, and you're done.



          In short: by a change of basis, you can assume that $N$ is the vector $(0,0,1)$ and that the other vectors all lie in the $(a, b, 0)$ plane, and things get easy.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
            $endgroup$
            – AlephNull
            Mar 20 at 11:55











          • $begingroup$
            Oh I see, you're talking about the elements, not the terms. I understand the solution now.
            $endgroup$
            – AlephNull
            Mar 20 at 13:42







          • 2




            $begingroup$
            By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
            $endgroup$
            – John Hughes
            Mar 20 at 16:51













          10












          10








          10





          $begingroup$

          Here's an observation: If $Q$ is a rotation matrix, then
          $$
          (Qx) times (Qy) = Q(x times y)
          $$



          You have to prove that, of course, but it's not too tough. Similarly,
          $$
          (Qx) cdot (Qy) = x cdot y
          $$

          and, for a scalar $alpha$, we have
          $$
          Q (alpha x) = alpha (Q x)
          $$



          Now suppose that for some vector $v$, we have
          $$
          (mathbfxtimesmathbfy cdot mathbfN) mathbfz + (mathbfytimesmathbfz cdot mathbfN) mathbfx + (mathbfztimesmathbfx cdot mathbfN) mathbfy=mathbfv.
          $$



          Key idea 1: You can apply the rules above to show that for any rotation matrix $Q$, you can apply $Q$ to all the elements on the left to get $Qv$.



          Key idea 2: You can choose $Q$ so that it takes $N$ to the vector $(0,0,1)$, and puts $x, y,$ and $z$ into the plane consisting of vectors of the form $(a, b, 0)$. And in that plane, it's easy to see that you get $0$, so $Qv = 0$. Hence $v = 0$, and you're done.



          In short: by a change of basis, you can assume that $N$ is the vector $(0,0,1)$ and that the other vectors all lie in the $(a, b, 0)$ plane, and things get easy.






          share|cite|improve this answer









          $endgroup$



          Here's an observation: If $Q$ is a rotation matrix, then
          $$
          (Qx) times (Qy) = Q(x times y)
          $$



          You have to prove that, of course, but it's not too tough. Similarly,
          $$
          (Qx) cdot (Qy) = x cdot y
          $$

          and, for a scalar $alpha$, we have
          $$
          Q (alpha x) = alpha (Q x)
          $$



          Now suppose that for some vector $v$, we have
          $$
          (mathbfxtimesmathbfy cdot mathbfN) mathbfz + (mathbfytimesmathbfz cdot mathbfN) mathbfx + (mathbfztimesmathbfx cdot mathbfN) mathbfy=mathbfv.
          $$



          Key idea 1: You can apply the rules above to show that for any rotation matrix $Q$, you can apply $Q$ to all the elements on the left to get $Qv$.



          Key idea 2: You can choose $Q$ so that it takes $N$ to the vector $(0,0,1)$, and puts $x, y,$ and $z$ into the plane consisting of vectors of the form $(a, b, 0)$. And in that plane, it's easy to see that you get $0$, so $Qv = 0$. Hence $v = 0$, and you're done.



          In short: by a change of basis, you can assume that $N$ is the vector $(0,0,1)$ and that the other vectors all lie in the $(a, b, 0)$ plane, and things get easy.







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered Mar 20 at 11:45









          John HughesJohn Hughes

          65.1k24293




          65.1k24293











          • $begingroup$
            Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
            $endgroup$
            – AlephNull
            Mar 20 at 11:55











          • $begingroup$
            Oh I see, you're talking about the elements, not the terms. I understand the solution now.
            $endgroup$
            – AlephNull
            Mar 20 at 13:42







          • 2




            $begingroup$
            By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
            $endgroup$
            – John Hughes
            Mar 20 at 16:51
















          • $begingroup$
            Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
            $endgroup$
            – AlephNull
            Mar 20 at 11:55











          • $begingroup$
            Oh I see, you're talking about the elements, not the terms. I understand the solution now.
            $endgroup$
            – AlephNull
            Mar 20 at 13:42







          • 2




            $begingroup$
            By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
            $endgroup$
            – John Hughes
            Mar 20 at 16:51















          $begingroup$
          Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
          $endgroup$
          – AlephNull
          Mar 20 at 11:55





          $begingroup$
          Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
          $endgroup$
          – AlephNull
          Mar 20 at 11:55













          $begingroup$
          Oh I see, you're talking about the elements, not the terms. I understand the solution now.
          $endgroup$
          – AlephNull
          Mar 20 at 13:42





          $begingroup$
          Oh I see, you're talking about the elements, not the terms. I understand the solution now.
          $endgroup$
          – AlephNull
          Mar 20 at 13:42





          2




          2




          $begingroup$
          By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
          $endgroup$
          – John Hughes
          Mar 20 at 16:51




          $begingroup$
          By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
          $endgroup$
          – John Hughes
          Mar 20 at 16:51











          11












          $begingroup$

          If what is required is only to prove the validity of the given identity, there is another approach. Observe that if $x$ and $y$ are linearly dependent, i.e. for some $c$, $x=c y$ or $y=c x$, then the identity holds trivially because $wtimes v =-(vtimes w)$ and $v times v=0$ for all $v,w$. Thus, we may assume $x$ and $y$ are linearly independent and hence $z$ is a linear combination of $x$ and $y$, that is, $z=ax+by$ for some $a,b$. Now, since the given identity is linear in each variable and it holds for both $z=x$ and $z=y$, it is also true for $z=ax+by$. This proves the identity. It can be also noted that $N$ being perpendicular to the plane containing $x,y,z$ plays no role in this proof.






          share|cite|improve this answer











          $endgroup$








          • 2




            $begingroup$
            very nice solution!
            $endgroup$
            – John Hughes
            Mar 20 at 16:54










          • $begingroup$
            Indeed, this is very elegant. So my last remark had some significance!
            $endgroup$
            – AlephNull
            Mar 20 at 17:06










          • $begingroup$
            Thank you both :-)
            $endgroup$
            – Song
            Mar 21 at 7:07















          11












          $begingroup$

          If what is required is only to prove the validity of the given identity, there is another approach. Observe that if $x$ and $y$ are linearly dependent, i.e. for some $c$, $x=c y$ or $y=c x$, then the identity holds trivially because $wtimes v =-(vtimes w)$ and $v times v=0$ for all $v,w$. Thus, we may assume $x$ and $y$ are linearly independent and hence $z$ is a linear combination of $x$ and $y$, that is, $z=ax+by$ for some $a,b$. Now, since the given identity is linear in each variable and it holds for both $z=x$ and $z=y$, it is also true for $z=ax+by$. This proves the identity. It can be also noted that $N$ being perpendicular to the plane containing $x,y,z$ plays no role in this proof.






          share|cite|improve this answer











          $endgroup$








          • 2




            $begingroup$
            very nice solution!
            $endgroup$
            – John Hughes
            Mar 20 at 16:54










          • $begingroup$
            Indeed, this is very elegant. So my last remark had some significance!
            $endgroup$
            – AlephNull
            Mar 20 at 17:06










          • $begingroup$
            Thank you both :-)
            $endgroup$
            – Song
            Mar 21 at 7:07













          11












          11








          11





          $begingroup$

          If what is required is only to prove the validity of the given identity, there is another approach. Observe that if $x$ and $y$ are linearly dependent, i.e. for some $c$, $x=c y$ or $y=c x$, then the identity holds trivially because $wtimes v =-(vtimes w)$ and $v times v=0$ for all $v,w$. Thus, we may assume $x$ and $y$ are linearly independent and hence $z$ is a linear combination of $x$ and $y$, that is, $z=ax+by$ for some $a,b$. Now, since the given identity is linear in each variable and it holds for both $z=x$ and $z=y$, it is also true for $z=ax+by$. This proves the identity. It can be also noted that $N$ being perpendicular to the plane containing $x,y,z$ plays no role in this proof.






          share|cite|improve this answer











          $endgroup$



          If what is required is only to prove the validity of the given identity, there is another approach. Observe that if $x$ and $y$ are linearly dependent, i.e. for some $c$, $x=c y$ or $y=c x$, then the identity holds trivially because $wtimes v =-(vtimes w)$ and $v times v=0$ for all $v,w$. Thus, we may assume $x$ and $y$ are linearly independent and hence $z$ is a linear combination of $x$ and $y$, that is, $z=ax+by$ for some $a,b$. Now, since the given identity is linear in each variable and it holds for both $z=x$ and $z=y$, it is also true for $z=ax+by$. This proves the identity. It can be also noted that $N$ being perpendicular to the plane containing $x,y,z$ plays no role in this proof.







          share|cite|improve this answer














          share|cite|improve this answer



          share|cite|improve this answer








          edited Mar 20 at 16:22

























          answered Mar 20 at 16:15









          SongSong

          18.5k21651




          18.5k21651







          • 2




            $begingroup$
            very nice solution!
            $endgroup$
            – John Hughes
            Mar 20 at 16:54










          • $begingroup$
            Indeed, this is very elegant. So my last remark had some significance!
            $endgroup$
            – AlephNull
            Mar 20 at 17:06










          • $begingroup$
            Thank you both :-)
            $endgroup$
            – Song
            Mar 21 at 7:07












          • 2




            $begingroup$
            very nice solution!
            $endgroup$
            – John Hughes
            Mar 20 at 16:54










          • $begingroup$
            Indeed, this is very elegant. So my last remark had some significance!
            $endgroup$
            – AlephNull
            Mar 20 at 17:06










          • $begingroup$
            Thank you both :-)
            $endgroup$
            – Song
            Mar 21 at 7:07







          2




          2




          $begingroup$
          very nice solution!
          $endgroup$
          – John Hughes
          Mar 20 at 16:54




          $begingroup$
          very nice solution!
          $endgroup$
          – John Hughes
          Mar 20 at 16:54












          $begingroup$
          Indeed, this is very elegant. So my last remark had some significance!
          $endgroup$
          – AlephNull
          Mar 20 at 17:06




          $begingroup$
          Indeed, this is very elegant. So my last remark had some significance!
          $endgroup$
          – AlephNull
          Mar 20 at 17:06












          $begingroup$
          Thank you both :-)
          $endgroup$
          – Song
          Mar 21 at 7:07




          $begingroup$
          Thank you both :-)
          $endgroup$
          – Song
          Mar 21 at 7:07











          5












          $begingroup$

          Writing $x=ahati+bhatj,,y=chati+dhatj,,z=ehati+fhatj,,N=Nhatk$ reduces the sum to $$N((ad-bc)(ehati+fhatj)+(cf-de)(ahati+bhatj)+(be-af)(chati+dhatj)).$$The $hati$ coefficient is $N(ade-bce+acf-ade+bce-acf)=0$. The $hatj$ coefficient can be handled similarly.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
            $endgroup$
            – AlephNull
            Mar 20 at 13:45
















          5












          $begingroup$

          Writing $x=ahati+bhatj,,y=chati+dhatj,,z=ehati+fhatj,,N=Nhatk$ reduces the sum to $$N((ad-bc)(ehati+fhatj)+(cf-de)(ahati+bhatj)+(be-af)(chati+dhatj)).$$The $hati$ coefficient is $N(ade-bce+acf-ade+bce-acf)=0$. The $hatj$ coefficient can be handled similarly.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
            $endgroup$
            – AlephNull
            Mar 20 at 13:45














          5












          5








          5





          $begingroup$

          Writing $x=ahati+bhatj,,y=chati+dhatj,,z=ehati+fhatj,,N=Nhatk$ reduces the sum to $$N((ad-bc)(ehati+fhatj)+(cf-de)(ahati+bhatj)+(be-af)(chati+dhatj)).$$The $hati$ coefficient is $N(ade-bce+acf-ade+bce-acf)=0$. The $hatj$ coefficient can be handled similarly.






          share|cite|improve this answer









          $endgroup$



          Writing $x=ahati+bhatj,,y=chati+dhatj,,z=ehati+fhatj,,N=Nhatk$ reduces the sum to $$N((ad-bc)(ehati+fhatj)+(cf-de)(ahati+bhatj)+(be-af)(chati+dhatj)).$$The $hati$ coefficient is $N(ade-bce+acf-ade+bce-acf)=0$. The $hatj$ coefficient can be handled similarly.







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered Mar 20 at 12:45









          J.G.J.G.

          32.5k23250




          32.5k23250











          • $begingroup$
            I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
            $endgroup$
            – AlephNull
            Mar 20 at 13:45

















          • $begingroup$
            I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
            $endgroup$
            – AlephNull
            Mar 20 at 13:45
















          $begingroup$
          I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
          $endgroup$
          – AlephNull
          Mar 20 at 13:45





          $begingroup$
          I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
          $endgroup$
          – AlephNull
          Mar 20 at 13:45












          4












          $begingroup$

          Since $bf x, bf y, bf z$ are coplanar, they are linearly dependent. Since the result to be proved is symmetric in $bf x, bf y, bf z$, withouht loss of generality we can write $bf z = lambda bf x + mu bf y$ for some scalars $lambda, mu$.



          Now, $$beginalign
          & (bf y times bf z cdot bf N); bf x \
          = & (bf y times (lambda bf x + mu bf y) cdot bf N); bf x \
          = & (bf y times lambda bf x cdot bf N); bf x \
          = & (bf y times bf x cdot bf N), (lambda bf x)
          endalign$$

          and similarly $$beginalign
          & (bf z times bf x cdot bf N); bf y \
          = & (bf y times bf x cdot bf N), (mu bf y)
          endalign$$

          So $$beginalign
          & (bf y times bf z cdot bf N); bf x +
          (bf z times bf x cdot bf N); bf y \
          = & (bf y times bf x cdot bf N), (lambda bf x + mu bf y)\
          = & -(bf x times bf y cdot bf N);z
          endalign $$

          and the result follows.






          share|cite|improve this answer









          $endgroup$

















            4












            $begingroup$

            Since $bf x, bf y, bf z$ are coplanar, they are linearly dependent. Since the result to be proved is symmetric in $bf x, bf y, bf z$, withouht loss of generality we can write $bf z = lambda bf x + mu bf y$ for some scalars $lambda, mu$.



            Now, $$beginalign
            & (bf y times bf z cdot bf N); bf x \
            = & (bf y times (lambda bf x + mu bf y) cdot bf N); bf x \
            = & (bf y times lambda bf x cdot bf N); bf x \
            = & (bf y times bf x cdot bf N), (lambda bf x)
            endalign$$

            and similarly $$beginalign
            & (bf z times bf x cdot bf N); bf y \
            = & (bf y times bf x cdot bf N), (mu bf y)
            endalign$$

            So $$beginalign
            & (bf y times bf z cdot bf N); bf x +
            (bf z times bf x cdot bf N); bf y \
            = & (bf y times bf x cdot bf N), (lambda bf x + mu bf y)\
            = & -(bf x times bf y cdot bf N);z
            endalign $$

            and the result follows.






            share|cite|improve this answer









            $endgroup$















              4












              4








              4





              $begingroup$

              Since $bf x, bf y, bf z$ are coplanar, they are linearly dependent. Since the result to be proved is symmetric in $bf x, bf y, bf z$, withouht loss of generality we can write $bf z = lambda bf x + mu bf y$ for some scalars $lambda, mu$.



              Now, $$beginalign
              & (bf y times bf z cdot bf N); bf x \
              = & (bf y times (lambda bf x + mu bf y) cdot bf N); bf x \
              = & (bf y times lambda bf x cdot bf N); bf x \
              = & (bf y times bf x cdot bf N), (lambda bf x)
              endalign$$

              and similarly $$beginalign
              & (bf z times bf x cdot bf N); bf y \
              = & (bf y times bf x cdot bf N), (mu bf y)
              endalign$$

              So $$beginalign
              & (bf y times bf z cdot bf N); bf x +
              (bf z times bf x cdot bf N); bf y \
              = & (bf y times bf x cdot bf N), (lambda bf x + mu bf y)\
              = & -(bf x times bf y cdot bf N);z
              endalign $$

              and the result follows.






              share|cite|improve this answer









              $endgroup$



              Since $bf x, bf y, bf z$ are coplanar, they are linearly dependent. Since the result to be proved is symmetric in $bf x, bf y, bf z$, withouht loss of generality we can write $bf z = lambda bf x + mu bf y$ for some scalars $lambda, mu$.



              Now, $$beginalign
              & (bf y times bf z cdot bf N); bf x \
              = & (bf y times (lambda bf x + mu bf y) cdot bf N); bf x \
              = & (bf y times lambda bf x cdot bf N); bf x \
              = & (bf y times bf x cdot bf N), (lambda bf x)
              endalign$$

              and similarly $$beginalign
              & (bf z times bf x cdot bf N); bf y \
              = & (bf y times bf x cdot bf N), (mu bf y)
              endalign$$

              So $$beginalign
              & (bf y times bf z cdot bf N); bf x +
              (bf z times bf x cdot bf N); bf y \
              = & (bf y times bf x cdot bf N), (lambda bf x + mu bf y)\
              = & -(bf x times bf y cdot bf N);z
              endalign $$

              and the result follows.







              share|cite|improve this answer












              share|cite|improve this answer



              share|cite|improve this answer










              answered Mar 20 at 19:43









              alephzeroalephzero

              72037




              72037





















                  2












                  $begingroup$

                  If you know a little about the exterior algebra we can see this almost immediately, and in a way that generalizes substantially.




                  Pick any plane $Pi$ containing $bf x, bf y, bf z$. The map on $Pi$ defined by $$(bf a, bf b, bf c) mapsto [(bf a times bf b) cdot bf N] bf c + [(bf b times bf c) cdot bf N] bf a + [(bf c times bf a) cdot bf N] bf b$$
                  is visibly trilinear and totally skew in its arguments, so it is a (vector-valued) $3$-form on a $2$-dimensional vector space and hence is the zero map.




                  NB this argument doesn't use any properties of $bf N$.






                  share|cite|improve this answer









                  $endgroup$

















                    2












                    $begingroup$

                    If you know a little about the exterior algebra we can see this almost immediately, and in a way that generalizes substantially.




                    Pick any plane $Pi$ containing $bf x, bf y, bf z$. The map on $Pi$ defined by $$(bf a, bf b, bf c) mapsto [(bf a times bf b) cdot bf N] bf c + [(bf b times bf c) cdot bf N] bf a + [(bf c times bf a) cdot bf N] bf b$$
                    is visibly trilinear and totally skew in its arguments, so it is a (vector-valued) $3$-form on a $2$-dimensional vector space and hence is the zero map.




                    NB this argument doesn't use any properties of $bf N$.






                    share|cite|improve this answer









                    $endgroup$















                      2












                      2








                      2





                      $begingroup$

                      If you know a little about the exterior algebra we can see this almost immediately, and in a way that generalizes substantially.




                      Pick any plane $Pi$ containing $bf x, bf y, bf z$. The map on $Pi$ defined by $$(bf a, bf b, bf c) mapsto [(bf a times bf b) cdot bf N] bf c + [(bf b times bf c) cdot bf N] bf a + [(bf c times bf a) cdot bf N] bf b$$
                      is visibly trilinear and totally skew in its arguments, so it is a (vector-valued) $3$-form on a $2$-dimensional vector space and hence is the zero map.




                      NB this argument doesn't use any properties of $bf N$.






                      share|cite|improve this answer









                      $endgroup$



                      If you know a little about the exterior algebra we can see this almost immediately, and in a way that generalizes substantially.




                      Pick any plane $Pi$ containing $bf x, bf y, bf z$. The map on $Pi$ defined by $$(bf a, bf b, bf c) mapsto [(bf a times bf b) cdot bf N] bf c + [(bf b times bf c) cdot bf N] bf a + [(bf c times bf a) cdot bf N] bf b$$
                      is visibly trilinear and totally skew in its arguments, so it is a (vector-valued) $3$-form on a $2$-dimensional vector space and hence is the zero map.




                      NB this argument doesn't use any properties of $bf N$.







                      share|cite|improve this answer












                      share|cite|improve this answer



                      share|cite|improve this answer










                      answered Mar 21 at 5:02









                      TravisTravis

                      63.8k769151




                      63.8k769151





















                          1












                          $begingroup$

                          By the properties of the triple product ( circluar shift) we can rearrange formula:



                          $ (mathbfxtimesmathbfy) cdot mathbfN) mathbfz + (mathbfytimesmathbfz) cdot mathbfN) mathbfx + (mathbfztimesmathbfx) cdot mathbfN) mathbfy \ =(mathbfNtimesmathbfx) cdot mathbfy) mathbfz + (mathbfNtimesmathbfy) cdot mathbfz) mathbfx + (mathbfNtimesmathbfz) cdot mathbfx) mathbfy $



                          All cross product vectors $$v_1=(mathbfNtimesmathbfx),v_2=(mathbfNtimesmathbfy), v_3=(mathbfNtimesmathbfz)$$

                          lie in the plane of coplanar vectors $mathbfx,mathbfy,mathbfz$ and they are vectors $mathbfx,mathbfy,mathbfz$ rotated by $pi/2$ in this plane.



                          So we can limit themselves to this plane and take any vectors with components $mathbfx=[ x_1 x_2]^T,mathbfy=[ y_1 y_2]^T,mathbfz =[ z_1 z_2]^T$.



                          Transform them with the rotation matrix $R=beginbmatrix 0 & -1 \ 1 & 0 endbmatrix$ , calculate appropriate dot products and finally check the formula with these assumed general components.



                          Namely we need to calculate:
                          $$(y^TRx)z+(z^TRy)x+(x^TRz)y$$






                          share|cite|improve this answer











                          $endgroup$

















                            1












                            $begingroup$

                            By the properties of the triple product ( circluar shift) we can rearrange formula:



                            $ (mathbfxtimesmathbfy) cdot mathbfN) mathbfz + (mathbfytimesmathbfz) cdot mathbfN) mathbfx + (mathbfztimesmathbfx) cdot mathbfN) mathbfy \ =(mathbfNtimesmathbfx) cdot mathbfy) mathbfz + (mathbfNtimesmathbfy) cdot mathbfz) mathbfx + (mathbfNtimesmathbfz) cdot mathbfx) mathbfy $



                            All cross product vectors $$v_1=(mathbfNtimesmathbfx),v_2=(mathbfNtimesmathbfy), v_3=(mathbfNtimesmathbfz)$$

                            lie in the plane of coplanar vectors $mathbfx,mathbfy,mathbfz$ and they are vectors $mathbfx,mathbfy,mathbfz$ rotated by $pi/2$ in this plane.



                            So we can limit themselves to this plane and take any vectors with components $mathbfx=[ x_1 x_2]^T,mathbfy=[ y_1 y_2]^T,mathbfz =[ z_1 z_2]^T$.



                            Transform them with the rotation matrix $R=beginbmatrix 0 & -1 \ 1 & 0 endbmatrix$ , calculate appropriate dot products and finally check the formula with these assumed general components.



                            Namely we need to calculate:
                            $$(y^TRx)z+(z^TRy)x+(x^TRz)y$$






answered Mar 20 at 13:42 by Widawensen, edited Mar 21 at 9:01





















Another approach to the problem uses the determinant formula for the triple product:

$$\mathbf{a}\cdot(\mathbf{b}\times \mathbf{c}) = \det \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}$$

Then consider the determinant

$$\begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_1 & x_1 & y_1 & z_1 \end{vmatrix}$$

whose columns consist of the components of the vectors $\mathbf{N},\mathbf{x},\mathbf{y},\mathbf{z}$ (the fourth row repeats the first one).

Of course such a determinant equals $0$.
Expanding the determinant along the fourth row we obtain:

$$-n_1\begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix} +x_1\begin{vmatrix} n_1 & y_1 & z_1 \\ n_2 & y_2 & z_2 \\ n_3 & y_3 & z_3 \end{vmatrix} -y_1\begin{vmatrix} n_1 & x_1 & z_1 \\ n_2 & x_2 & z_2 \\ n_3 & x_3 & z_3 \end{vmatrix} +z_1\begin{vmatrix} n_1 & x_1 & y_1 \\ n_2 & x_2 & y_2 \\ n_3 & x_3 & y_3 \end{vmatrix}=0$$

from which the formula for the first component of the vector given in the question follows
(the first summand is equal to $0$ because the vectors $\mathbf{x},\mathbf{y},\mathbf{z}$ are coplanar; the columns of a minor can be permuted where needed, as for the third summand, to give the appropriate sign in the expression).

Similarly the determinants

$$\begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_2 & x_2 & y_2 & z_2 \end{vmatrix} \quad\text{and}\quad \begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_3 & x_3 & y_3 & z_3 \end{vmatrix}$$

give the second and the third component of the question vector, equal to $0$.
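To make the identification explicit (this only restates the expansion above using the triple-product formula; nothing new is assumed): each $3\times 3$ minor is itself a scalar triple product, so the cofactor expansion reads
$$x_1\big((\mathbf{y}\times\mathbf{z})\cdot\mathbf{N}\big)+y_1\big((\mathbf{z}\times\mathbf{x})\cdot\mathbf{N}\big)+z_1\big((\mathbf{x}\times\mathbf{y})\cdot\mathbf{N}\big)=0,$$
which is precisely the first component of the identity in the question.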






answered Mar 29 at 13:57 by Widawensen, edited Mar 29 at 14:04
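Not part of either answer, but a quick numerical sanity check of the identity is easy to run. The sketch below (plain numpy, with illustrative variable names and randomly generated coplanar vectors) evaluates the left-hand side for random data and confirms it is numerically zero.

import numpy as np

rng = np.random.default_rng(0)

# A random plane: two independent spanning vectors and the unit normal N.
a, b = rng.standard_normal(3), rng.standard_normal(3)
N = np.cross(a, b)
N /= np.linalg.norm(N)

# Three random coplanar vectors lying in that plane.
x = rng.standard_normal() * a + rng.standard_normal() * b
y = rng.standard_normal() * a + rng.standard_normal() * b
z = rng.standard_normal() * a + rng.standard_normal() * b

# (x × y · N) z + (y × z · N) x + (z × x · N) y should be the zero vector.
lhs = (np.dot(np.cross(x, y), N) * z
       + np.dot(np.cross(y, z), N) * x
       + np.dot(np.cross(z, x), N) * y)

print(lhs)                    # approximately [0. 0. 0.]
assert np.allclose(lhs, 0.0)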


























